Categories
Misc

Developing Applications with NVIDIA BlueField DPU and NVIDIA DOCA Libraries

NVIDIA DOCA libraries simplify the development process of BlueField DPU applications. The development process for DPUs can get complex; this is where NVIDIA DOCA comes in, with several built-in libraries that allow for plug-and-play, simple application development.

In the previous post, you saw the creation of an FRR dataplane plugin to accelerate PBR rules on BlueField using the DPDK rte_flow library (for part 1, see Developing Applications with NVIDIA BlueField DPU and DPDK). In this post, I take you through the creation of the FRR DOCA dataplane plugin and show you how to offload PBR rules using the new DOCA flow library.

Adding the DOCA dataplane plugin to Zebra 

I still used the DPDK APIs for hardware initialization, but then used the DOCA flow APIs for setting up the dataplane flow pipeline. To do that I had to link the DPDK (libdpdk.pc) and DOCA flow (doca-flow.pc) shared libraries to the DOCA dataplane plugin.

root@dpu-arm:~# export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/mellanox/dpdk/lib/aarch64-linux-gnu/pkgconfig 
root@dpu-arm:~# pkg-config --libs doca-flow 
-ldoca_flow 
root@dpu-arm:~# pkg-config --cflags doca-flow 
-DALLOW_EXPERIMENTAL_API -include rte_config.h -mcpu=cortex-a72 -DALLOW_EXPERIMENTAL_API -I/opt/mellanox/dpdk/include/dpdk -I/opt/mellanox/dpdk/include/dpdk/../aarch64-linux-gnu/dpdk -I/opt/mellanox/dpdk/include/dpdk -I/usr/include/libnl3 
root@dpu-arm:~# 

I added the package check-and-define macro for DPDK and DOCA flow to the FRR build configuration (configure.ac).

if test "$enable_dp_doca" = "yes"; then 
  PKG_CHECK_MODULES([DOCA], [libdpdk doca-flow], [ 
    AC_DEFINE([HAVE_DOCA], [1], [Enable DOCA backend]) 
    DOCA=true 
  ], [ 
    AC_MSG_ERROR([configuration specifies --enable-dp-doca but DOCA libs were not found]) 
  ]) 
fi

I included both the DPDK and DOCA flow libs and cflags into the zebra-dp-doca make macro (zebra/subdir.am).

zebra_zebra_dplane_doca_la_CFLAGS = $(DOCA_CFLAGS) 
zebra_zebra_dplane_doca_la_LIBADD  = $(DOCA_LIBS) 

The DOCA dataplane plugin can be enabled when the FRR service is started using /etc/frr/daemons.

zebra_options=" -M dplane_doca -A 127.0.0.1"

Hardware initialization and port mapping 

I used the DPDK APIs, rte_eal_init and rte_eth_dev_info_get, to initialize the hardware and to set up the Zebra interface to DPDK port mapping. This workflow is the same as with the DPDK dataplane plugin in the previous post.

root@dpu-arm:~# vtysh -c "show dplane doca port" 
Total ports: 6 cores: 8 
Port Device           IfName           IfIndex          sw,domain,port 
0    0000:03:00.0     p0               4                0000:03:00.0,0,65535 
1    0000:03:00.0     pf0hpf           6                0000:03:00.0,0,4095 
2    0000:03:00.0     pf0vf0           15               0000:03:00.0,0,4096 
3    0000:03:00.0     pf0vf1           16               0000:03:00.0,0,4097 
4    0000:03:00.1     p1               5                0000:03:00.1,1,65535 
5    0000:03:00.1     pf1hpf           7                0000:03:00.1,1,20479 
root@dpu-arm:~#
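
The plugin calls those two DPDK APIs directly; the following is a minimal sketch of how that step can look. The zd_doca_hw_init wrapper name and the port-caching comment are illustrative assumptions, not the actual plugin code; only rte_eal_init and rte_eth_dev_info_get come from the text above.

/* Minimal sketch of hardware initialization and port discovery (hypothetical
 * wrapper name; error handling trimmed). rte_eal_init probes the devices
 * passed on the EAL command line; rte_eth_dev_info_get is then used to map
 * each DPDK port back to a Zebra interface. */
#include <rte_eal.h>
#include <rte_ethdev.h>

static int zd_doca_hw_init(int argc, char **argv)
{
    struct rte_eth_dev_info dev_info;
    uint16_t port_id;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    RTE_ETH_FOREACH_DEV(port_id) {
        if (rte_eth_dev_info_get(port_id, &dev_info) < 0)
            continue;
        /* dev_info.switch_info carries the switch domain and port shown in
         * the sw,domain,port column above; cache port_id against the
         * matching Zebra interface here. */
    }
    return 0;
}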

DOCA flow initialization 

To use doca-flow for programming PBR rules, I had to initialize the doca-flow and doca-flow-port databases. This initialization was done after the hardware was initialized using rte_eal_init.

I used doca_flow_init for initializing the doca-flow library with the flow and queue count config.

struct doca_flow_cfg flow_cfg; 

memset(&flow_cfg, 0, sizeof(flow_cfg)); 
flow_cfg.total_sessions = ZD_DOCA_FLOW_MAX; 
flow_cfg.queues = doca_ctx->nb_cores;  

doca_flow_init(&flow_cfg, &err); 

As I used DPDK to set up the hardware ports, I had to install them in the doca-flow-port database with the dpdk_port_id.

struct doca_flow_port_cfg port_cfg; 

 memset(&port_cfg, 0, sizeof(port_cfg)); 
port_cfg.port_id = dpdk_port_id; 
port_cfg.type = DOCA_FLOW_PORT_DPDK_BY_ID; 
snprintf(port_id_str, ZD_PORT_STR_MAX, "%u", port_cfg.port_id); 
port_cfg.devargs = port_id_str; 

doca_port = doca_flow_port_start(&port_cfg, &err);

Program PBR rule using doca-flow APIs 

DOCA flows are programmed with a series of data structures for the match, action, forward, and monitor attributes.

struct doca_flow_match match, match_mask; 
struct doca_flow_actions actions; 
struct doca_flow_fwd fwd; 
struct doca_flow_monitor monitor;

Flow match 

This is specified as a match and match-mask. Match-mask is optional and is auto-filled by the doca-flow library if not specified.

memset(&match, 0, sizeof(match)); 
memset(&match_mask, 0, sizeof(match_mask));  

match.out_src_ip.type = DOCA_FLOW_IP4_ADDR; 
match.out_src_ip.ipv4_addr = src_ip; 
match_mask.out_src_ip.ipv4_addr = src_ip_mask; 

match.out_dst_ip.type = DOCA_FLOW_IP4_ADDR; 
match.out_dst_ip.ipv4_addr = dst_ip; 
match_mask.out_dst_ip.ipv4_addr = dst_ip_mask; 

match.out_l4_type = ip_proto; 

match.out_src_port = RTE_BE16(l4_src_port); 
match_mask.out_src_port = UINT16_MAX; 

match.out_dst_port = RTE_BE16(l4_dst_port); 
match_mask.out_dst_port = UINT16_MAX; 

I skipped populating fields such as eth or eth-mask. This is because the doca-flow library can auto-populate such fields, for example to RTE_ETHER_TYPE_IPV4 or RTE_ETHER_TYPE_IPV6, based on other match fields such as dst_ip or src_ip.

Flow actions 

To route the packet, I had to change the destination MAC address to the gateway (leaf2) MAC address, decrement the TTL, and change the source MAC address. This was originally discussed in part 1, Developing Applications with NVIDIA BlueField DPU and DPDK.

memset(&actions, 0, sizeof(actions)); 

 actions.dec_ttl = true; 
 memcpy(actions.mod_src_mac, uplink_mac, DOCA_ETHER_ADDR_LEN); 
 memcpy(actions.mod_dst_mac, gw_mac, DOCA_ETHER_ADDR_LEN); 

Flow forward 

Then, I set the output port to the uplink. 

memset(&fwd, 0, sizeof(fwd)); 
 
fwd.type = DOCA_FLOW_FWD_PORT; 
fwd.port_id = out_port_id; 

Flow monitoring 

I set up flow counters for troubleshooting.

memset(&monitor, 0, sizeof(monitor));  

monitor.flags |= DOCA_FLOW_MONITOR_COUNT; 

DOCA flow pipes and entries 

Flow creation is a two-step process:

  1. Create a flow pipe.
  2. Add a flow entry to the flow pipe. 

The first step creates a software template for a lookup stage. The second step uses the template to program the flow in the hardware. 

Pipes are useful when you must program many similar flows. For such a case, you can set up a single match template (pipe) and indicate which match-field must be updated at the time of flow entry creation (for example, a layer 4 destination port). Subsequent flow entries need only populate the match fields that vary from the pipe (the layer 4 destination port). 
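
As a hedged illustration of that templating (not code from the actual plugin), the sketch below sets up one pipe that matches UDP flows and marks the layer 4 destination port as changeable, supplying the concrete port per entry. The convention of marking a changeable field with a full mask in the pipe match follows the doca-flow documentation for this DOCA release, so treat it as an assumption and verify against your version; tmpl_cfg, tmpl_pipe, and the port value 53 are illustrative, while actions, monitor, fwd, and err are the structures used elsewhere in this section.

/* Hypothetical templated pipe: only the L4 destination port varies per entry. */
struct doca_flow_match pipe_match, entry_match;
struct doca_flow_pipe_cfg tmpl_cfg;
struct doca_flow_pipe *tmpl_pipe;

memset(&pipe_match, 0, sizeof(pipe_match));
pipe_match.out_l4_type = IPPROTO_UDP;
pipe_match.out_dst_port = UINT16_MAX;      /* marked as changeable per entry */

memset(&tmpl_cfg, 0, sizeof(tmpl_cfg));
tmpl_cfg.name = "l4-dport-template";
tmpl_cfg.port = in_dport->doca_port;
tmpl_cfg.match = &pipe_match;
tmpl_cfg.actions = &actions;
tmpl_cfg.monitor = &monitor;
tmpl_cfg.is_root = true;

tmpl_pipe = doca_flow_create_pipe(&tmpl_cfg, &fwd, NULL, &err);

/* Each entry only fills the field that varies from the template. */
memset(&entry_match, 0, sizeof(entry_match));
entry_match.out_dst_port = RTE_BE16(53);   /* DNS, for example */
doca_flow_pipe_add_entry(0, tmpl_pipe, &entry_match, &actions, &monitor, &fwd, &err);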

In the case of PBR, each flow pattern is unique, so I created a separate pipe and entry for each PBR rule using the flow attributes that I already populated. 

struct doca_flow_pipe_cfg pipe_cfg; 

memset(&pipe_cfg, 0, sizeof(pipe_cfg)); 
pipe_cfg.name = "pbr"; 
pipe_cfg.port = in_dport->doca_port; 
pipe_cfg.match = &match; 
pipe_cfg.match_mask = &match_mask; 
pipe_cfg.actions = &actions; 
pipe_cfg.monitor = &monitor; 
pipe_cfg.is_root = true; 

flow_pipe = doca_flow_create_pipe(&pipe_cfg, &fwd, NULL, &err); 
flow_entry = doca_flow_pipe_add_entry(0, flow_pipe, &match, &actions, &monitor, &fwd, &err);

Flow deletion 

The flow pipe and entry creation APIs return pipe and flow pointers that must be cached for subsequent deletion. 

doca_flow_pipe_rm_entry(0, flow_entry); 
doca_flow_destroy_pipe(port_id, flow_pipe); 

Flow statistics 

At the time of flow creation, I set the DOCA_FLOW_MONITOR_COUNT flag. I queried the flow stats using doca_flow_query.

struct doca_flow_query query; 

// hit counters – query.total_pkts and query.total_bytes 
memset(&query, 0, sizeof(query)); 
doca_flow_query(flow_entry, &query); 

Verifying hardware acceleration 

The FRR PBR rule configuration and traffic generation are the same as with the DPDK plugin. The traffic is forwarded by the DPU hardware as expected and can be verified using the flow counters.

root@dpu-arm:~# vtysh -c "show dplane doca pbr flow" 
Rules if pf0vf0 
  Seq 1 pri 300 
  SRC IP Match: 172.20.0.8/32 
  DST IP Match: 172.30.0.8/32 
  IP protocol Match: 17 
  DST Port Match: 53 
  Tableid: 10000 
  Action: nh: 192.168.20.250 intf: p0 
  Action: mac: 00:00:5e:00:01:fa 
  DOCA flow: installed 0xffff28005150 
  DOCA stats: packets 202 bytes 24644 
root@dpu-arm:~# 

The offload can also be verified using the hardware entries:

root@dpu-arm:~# ~/mlx_steering_dump/mlx_steering_dump_parser.py -p `pidof zebra` -f /tmp/dpdkDump 
domain 0xe294002, table 0xaaab07648b10, matcher 0xffff28012c30, rule 0xffff28014040 
   match: outer_l3_type: 0x1, outer_ip_dst_addr: 172.30.0.8, outer_l4_type: 0x2, metadata_reg_c_0: 0x00030000, outer_l4_dport: 0x0035, outer_ip_src_addr: 172.20.0.8 
   action: MODIFY_HDR(hdr(dec_ip4_ttl)), rewrite index 0x0 & VPORT, num 0xffff & CTR(hits(352), bytes(42944)), index 0x806200

FRR now has a second dataplane plugin for hardware acceleration of PBR rules, using doca-flow.

Application development takeaways 

In this series, you saw how a DPU networking application can be hardware-accelerated with four steps using rte_flow or doca_flow:

  • Link the DOCA/DPDK libraries to the application. 
  • Initialize the hardware.
  • Set up the application-to-hardware port mapping.
  • Program flows for steering the traffic.

As more elements are offloaded on the DPU, the development process can get complex with increasing source lines of code (SLOC). That’s where DOCA abstractions help: 

  • DOCA comes with several built-in libraries such as doca-dpi, gRPC, Firefly time synchronization, and more. These libraries enable quick plug-n-play for your application.
  • DOCA constructs such as doca_pipe enable you to templatize your pipeline, eliminating boilerplate code and optimizing flow insertion.
  • Upcoming DOCA libraries, such as the hardware-accelerated LPM (longest prefix match), make building switch pipelines easier. This is particularly relevant to the sample application that you saw in this series, FRR, which is commonly deployed for building an LPM routing table (or RIB) with BGP.
  • With DOCA, you can also leapfrog into the exciting world of GPU + DPU development on the converged accelerators.
Figure 1. The NVIDIA BlueField-2 data processing unit showcased as a converged accelerator

Are you ready to take your application development to dizzying heights? Sign up for the DOCA Early Access developer program to start building today.

For more information, see the following resources: 

Categories
Misc

Running into an issue with training my image segmentation model

Hey all, first time poster (I have not been active due to my projects). I am very new to deep learning and ML; I have trained some semantic segmentation models in the past, but not a whole lot. I am asking for help with an error that I keep getting: a "no gradient provided for any variable" error. I am doing semantic segmentation of brain data, taking brain data and categorizing it (one-hot) into 4 specific categories; this data was filtered for the specific categories needed. Below is my main code. I believe the issue is due to the one-hot encoding method from Keras, but if anyone has ever dealt with something similar, some tips would be really helpful.

Thanks all, and have a great day, here is the code:

imgs_list_train = glob.glob("D:/BMEN/Data/Test/BrainSet2/*.npy")
masks_list_train = glob.glob("D:/BMEN/Data/Test/BrainSet2/*.npy")

batch_size = 32
gen_train = DataEncoder(imgs_list_train, masks_list_train, batch_size=batch_size)

model_name = "C:/Users/Keilan Pieper/Desktop/BMEN/ImageSegmentation/Models/BrainStructure"
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4))

history = model.fit(gen_train, epochs=1, verbose=1,
                    callbacks=[early_stop, monitor, lr_schedule])

Data encoder class

class DataEncoder(keras.utils.Sequence):
    'Generates data for Keras'
    def __init__(self, imgs_list, masks_list, patch_size=(128, 128), batch_size=32, shuffle=True):
        self.imgs_list = imgs_list
        self.masks_list = masks_list
        self.patch_size = patch_size
        self.batch_size = batch_size
        self.nsamples = len(imgs_list)
        self.shuffle = True
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return len(self.imgs_list) // self.batch_size

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        batch_indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]

        # Generate data
        X, Y = self.__data_generation(batch_indexes)

        return X, Y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(self.nsamples)
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, batch_indexes):
        'Generates data containing batch_size samples'
        # Initialization
        X = np.empty((self.batch_size, self.patch_size[0], self.patch_size[1], 1))
        Y = np.empty((self.batch_size, self.patch_size[0], self.patch_size[1], 1))

        for (jj, ii) in enumerate(batch_indexes):
            aux_img = np.load(self.imgs_list[ii])
            aux_mask = np.load(self.masks_list[ii])

            # Implement data augmentation function
            aux_img_patch, aux_mask_patch = self.__extract_patch(aux_img, aux_mask)

            X[jj, :, :, 0] = aux_img_patch
            tf.one_hot(Y, 4)
            Y[jj, :, :, 0] = aux_mask_patch

        return X, Y

    def __extract_patch(self, img, mask):
        crop_idx = [None] * 2
        crop_idx[0] = np.random.randint(0, img.shape[0] - self.patch_size[0])
        crop_idx[1] = np.random.randint(0, img.shape[1] - self.patch_size[1])
        img_cropped = img[crop_idx[0]:crop_idx[0] + self.patch_size[0],
                          crop_idx[1]:crop_idx[1] + self.patch_size[1]]
        mask_cropped = mask[crop_idx[0]:crop_idx[0] + self.patch_size[0],
                            crop_idx[1]:crop_idx[1] + self.patch_size[1]]

        return img_cropped, mask_cropped

submitted by /u/Independent-Dust7072
[visit reddit] [comments]

Categories
Misc

Developer Meetup: Learn How Metropolis Boosts Go-to-Market Efforts​

The new expanded NVIDIA Metropolis program offers access to the world’s best development tools and services to reduce time and cost of managing your vision AI deployments.

The newly expanded NVIDIA Metropolis program offers you access to the world’s best development tools and services to reduce the time and cost of managing your vision-AI deployments. Join this developer meetup (dates and times below) with NVIDIA experts to learn five ways the NVIDIA Metropolis program will grow your vision AI business and enhance your go-to-market efforts​.

In this meetup, you will learn how:

  • Metropolis Validation Labs optimize your applications and accelerate deployments.
  • NVIDIA Fleet Command simplifies provisioning and management of edge deployments, accelerating the time to scale from POC to production.
  • NVIDIA LaunchPad provides easy access to GPU instances for faster POCs and customer trials.
  • Partners around the world are achieving success through this program.

Additionally, you will hear from elite partner Milestone Systems, who will share how NVIDIA Metropolis is boosting its AI software development, integration, and business development efforts.

Get Metropolis Certified to gain access to the NVIDIA Software stack, GPU servers, and marketing promotions worth over $100,000 in value. 

Select one of the following sessions in the region most convenient to you (Feb. 16 and 17): 

  • NALA: Feb. 16 at 1 pm PT
  • APAC/JAPAN: Feb. 17 at 1 pm JST/KST
  • EMEA: Feb. 17 at 4 pm CET

Register Now.

Categories
Misc

Renovations to Stream About: Taiwan Studio Showcases Architectural Designs Using Extended Reality

Interior renovations have never looked this good. TCImage, a studio based in Taipei, is showcasing compelling landscape and architecture designs by creating realistic 3D graphics and presenting them in virtual, augmented, and mixed reality — collectively known as extended reality, or XR. For clients to get a better understanding of the designs, TCImage produces high-quality, Read article >

The post Renovations to Stream About: Taiwan Studio Showcases Architectural Designs Using Extended Reality appeared first on The Official NVIDIA Blog.

Categories
Misc

Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways

Preventable train accidents like the 1985 disaster outside Tel Aviv in which a train collided with a school bus, killing 19 students and several adults, motivated Shahar Hania and Elen Katz to help save lives with technology. They founded Rail Vision, an Israeli startup that creates obstacle-detection and classification systems for the global railway industry Read article >

The post Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways appeared first on The Official NVIDIA Blog.

Categories
Misc

High-precision Image Editing with AI: EditGAN

EditGAN takes AI-driven image editing to the next level: it edits specific areas of an image based on user input, with high accuracy and without sacrificing image quality.

Editing photos of cats, cars, or even antique paintings has never been more accessible, thanks to a generative adversarial network (GAN) model called EditGAN. The work—from NVIDIA, the University of Toronto, and MIT researchers—builds on DatasetGAN, an AI vision model that can be trained with as few as 16 human-annotated images and performs as effectively as other methods that require 100x more images. EditGAN takes the power of the previous model and empowers the user to edit or manipulate the desired image with simple commands, such as drawing, without compromising the original image quality.

What is EditGAN?

According to the paper: “EditGAN is the first GAN-driven image-editing framework, which simultaneously offers very high-precision editing, requires very little annotated training data (and does not rely on external classifiers), can be run interactively in real time, allows for straightforward compositionality of multiple edits, and works on real embedded, GAN-generated, and even out-of-domain images.”

The model learns a specific number of editing vectors, which can be applied to an image interactively. Essentially, it forms an intuitive understanding of the images and their content, which can then be leveraged by users for specific modifications and editing. The model learns from similar images and recognizes different components and specific parts of the objects inside the images. A user can utilize this for targeted modifications of the different subparts or for editing within specific areas. Because of how precise the model is, the image is not distorted outside of the parameters set by the user. 

Figure 1. EditGAN in action: the trained model allows the user to make, sometimes dramatic, changes to the original image.

"The framework allows us to learn an arbitrary number of editing vectors, which can then be directly applied on other images at interactive rates," the researchers explained in their study. "We experimentally show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality. We can also easily combine multiple edits and perform plausible edits beyond EditGAN's training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks." 

From adding smiles, changing the direction someone is looking, creating a new hairstyle, or giving a car a nicer set of wheels, the researchers show just how capable the model can be with minimal data annotation. The user can draw a simple sketch or mask corresponding to the desired edit and guide the AI model to realize the modification, such as bigger cat ears or cooler headlights on a car. The AI then renders the image while maintaining a very high level of accuracy and preserving the quality of the original image. Afterward, the same edit can be applied to other images in real time.

Figure 2. An example of the pixels assigned to different parts of the image; the AI recognizes the different areas, down to whether a pixel belongs to the door handle or the fender, and can make edits based on human input.

How does this GAN work?

EditGAN assigns each pixel of the image to a category, such as a tire, windshield, or car frame. These pixels are controlled within the AI latent space and based on the input of the user, who can easily and flexibly edit those categories. EditGAN manipulates only those pixels associated with the desired change. The AI knows what each pixel represents based on other images used in training the model, so you could not attempt to add cat ears to a car with accurate results. But when used within the correct model, EditGAN is a phenomenal tool that provides exceptional image editing results.

Figure 3. EditGAN can train on a variety of classes of imagery, from animals to environments, forming a detailed understanding of their content.

EditGAN’s potential

AI-driven photo and image editing have the potential to streamline the workflow of photographers and content creators and to enable new levels of creativity and digital artistry. EditGAN also enables novice photographers and editors to produce high-quality content, along with the occasional viral meme.  

“This AI may transform how we edit photos and perhaps eventually video. It allows someone to take an image and alter it by using simple text commands. If you have a photo of a car and you want to make the wheels bigger, just type “make wheels bigger,” and poof!—there’s a completely photorealistic picture of the same car with bigger wheels.” – Fortune magazine

EditGAN may also be used for other important applications in the future. For example, EditGAN’s editing capabilities could be utilized to create large image datasets with certain characteristics. Such specific datasets can be useful when training downstream machine-learning models on different computer vision tasks.

Furthermore, the EditGAN framework may impact the development of future generations of GANs. While the present version of EditGAN focuses on image editing, similar methods could potentially be used to edit 3D shapes and objects, which would be useful when creating virtual 3D content for games, movies, or the metaverse.

To read more about this amazing methodology, check out their paper.

NVIDIA is always on the cutting edge of technology, check out NVIDIA Research for more innovative research.

Categories
Misc

I want to get started with TensorFlow. Does anybody recommend any free courses I could take?

submitted by /u/Confident-Penalty111
[visit reddit] [comments]

Categories
Misc

Split input data into separate LSTM layers

I have 10k+ documents which belong either to class 0 or 1. I want to train a model by feeding each sentence in a document through a text vectorization layer ➡️ embedding layer ➡️ LSTM layer, then concatenating the LSTM outputs for all sentences and feeding them to a dense layer with an output size of 128 ➡️ a dense layer with an output size of 1 with softmax as activation.

The number of sentences in each document is somewhere between 0 and 5000. Each sentence has between 1 and 5 words.

I have managed to create a model for predicting which class a sentence most likely belongs to, but I want to basically extend it to take all sentences from an example document to classify the entire document. Each sentence is not related to the others.

How do i go about this?

submitted by /u/jacobkodar
[visit reddit] [comments]

Categories
Misc

Error while running model train code

Hello everyone.

While trying to run the training code for my model, the following error happens:

Traceback (most recent call last):
  File "C:\Users\Diogo Alpendre\OneDrive - IPLeiria\Para arrumar\Documentos\GitHub\T22_AD_Detection\datasets\fstudent_dataset\fstudent_dataset.py", line 6, in <module>
    import tensorflow_datasets as tfds
  File "C:\Users\Diogo Alpendre\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow_datasets\__init__.py", line 64, in <module>
    from tensorflow_datasets import vision_language
  File "C:\Users\Diogo Alpendre\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow_datasets\vision_language\__init__.py", line 20, in <module>
    from tensorflow_datasets.vision_language.wit import Wit
  File "C:\Users\Diogo Alpendre\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow_datasets\vision_language\wit\__init__.py", line 18, in <module>
    from tensorflow_datasets.vision_language.wit.wit import Wit
  File "C:\Users\Diogo Alpendre\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow_datasets\vision_language\wit\wit.py", line 25, in <module>
    csv.field_size_limit(sys.maxsize)
OverflowError: Python int too large to convert to C long

How do I fix it?

Thanks for all the help

submitted by /u/dalpendre
[visit reddit] [comments]

Categories
Misc

Is building tensorflow from source supposed to take 8+ hours?

Initially I installed it using pip, but my CPU does not have AVX support. A solution I found online is that if you build TensorFlow from source, it will work around this. However, I started the compilation this morning at 10 am; now it's 6 pm and it's still going. My computer is literally just a repurposed office machine, so it is not the most powerful, but this seems way too long. Is this normal?

Some of my computer’s specs:

CPU: Intel Pentium Silver J5040 3.2 GHz

GPU: Intel GeminiLake [UHD Graphics 605]

8GB RAM

submitted by /u/ComprehensiveWarfare
[visit reddit] [comments]