
MONAI Leaps Forward with AutoML-Powered Model Development and Cloud-Native Deployments

Graphic showing logos of MONAI Application Packages + Helm

Project MONAI continues to expand its end-to-end workflow with new releases and a new component called the MONAI Deploy Inference Service.

Project MONAI is releasing updates to three existing frameworks: MONAI Core v0.8, MONAI Label v0.3, and MONAI Deploy App SDK v0.2. It’s also expanding its MONAI Deploy subsystem with the MONAI Deploy Inference Service (MIS), a server that runs MONAI Application Packages (MAPs) in a Kubernetes cluster as cloud-native microservices.

MIS extends MONAI’s end-to-end capabilities by integrating with a container orchestration system such as Kubernetes. Running MAPs on Kubernetes lets developers quickly start testing their models and move execution from local development to staging environments.

MONAI Core v0.8

MONAI Core v0.8 focuses on expanding its learning capabilities by adding both self-supervised and multi-instance learning support.

Also included is DiNTS, a new state-of-the-art differentiable network topology search framework that helps accelerate neural architecture search (NAS) for large-scale 3D image sets like those found in medical imaging.

Highlights include:

  • Multi-instance learning with examples for the MSD dataset.
  • Visualization of transforms and notebook with approaches for 3D image transform augmentation.
  • Self-supervised learning tutorials with a pretraining pipeline leveraging a vision transformer, highlighting training with unlabeled data and adaptation for downstream tasks.
  • DiNTS AutoML with examples using MSD tasks.

Get started with the new features using the included Jupyter notebooks.
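
As a flavor of the transform APIs that the visualization and augmentation notebooks build on, here is a minimal sketch of a 3D augmentation pipeline. It is illustrative only: the file path is a placeholder, and the transform choices are not taken from the release notebooks.

    # Minimal, illustrative MONAI transform pipeline for 3D augmentation;
    # the input path is a placeholder.
    from monai.transforms import (
        Compose, LoadImage, AddChannel,
        RandRotate90, RandFlip, RandGaussianNoise, ToTensor,
    )

    augment = Compose([
        LoadImage(image_only=True),                   # read a NIfTI/DICOM volume
        AddChannel(),                                 # add a channel-first dimension
        RandRotate90(prob=0.5, spatial_axes=(0, 1)),  # random 90-degree rotation
        RandFlip(prob=0.5, spatial_axis=0),           # random flip along one axis
        RandGaussianNoise(prob=0.2),                  # light noise augmentation
        ToTensor(),
    ])

    volume = augment("path/to/volume.nii.gz")         # placeholder path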

MONAI Label v0.3

MONAI Label v0.3 adds multi-label segmentation support, with DynUNet and UNETR networks as the base architecture options. It also brings multi-GPU training support to improve scalability, along with usability improvements that make active learning easier to use.

Highlights include:

  • Multi-Label Segmentation Support (see the network sketch after this list)
  • Multi-GPU Training
  • Active Learning UX Changes
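
To make the base-architecture option concrete, here is a minimal sketch of instantiating UNETR from MONAI for a multi-label task. The channel counts and patch size are illustrative assumptions, not values taken from MONAI Label.

    # Illustrative instantiation of UNETR (one of MONAI Label's base architectures)
    # for multi-label segmentation; channel counts and patch size are assumptions.
    import torch
    from monai.networks.nets import UNETR

    net = UNETR(
        in_channels=1,          # single-channel CT/MR input
        out_channels=14,        # one output channel per label (illustrative)
        img_size=(96, 96, 96),  # training patch size
    )
    logits = net(torch.rand(1, 1, 96, 96, 96))  # shape: (1, 14, 96, 96, 96)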

MONAI Deploy 

MONAI Deploy App SDK v0.2

MONAI Deploy App SDK v0.2 continues to expand its base operators, including support for additional DICOM operations.

Highlights include:

  • Operator for DICOM Series Selection (see the sketch after this list).
  • Operator for exporting DICOM Structured Report (SR) SOP instances for classification results.
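
A minimal sketch of wiring the series-selection operator into an App SDK application follows. The operator class names follow the SDK’s example apps, and the port names and run pattern are assumptions; check the v0.2 examples for exact signatures.

    # Sketch of chaining the DICOM series-selection operator in an App SDK app;
    # class names, port names, and the run pattern follow the SDK's examples
    # and are assumptions here.
    from monai.deploy.core import Application
    from monai.deploy.operators import (
        DICOMDataLoaderOperator,
        DICOMSeriesSelectorOperator,
        DICOMSeriesToVolumeOperator,
    )

    class SelectAndLoadApp(Application):
        def compose(self):
            loader = DICOMDataLoaderOperator()         # reads a DICOM study from disk
            selector = DICOMSeriesSelectorOperator()   # picks the series matching rules
            to_volume = DICOMSeriesToVolumeOperator()  # converts the series to a volume
            self.add_flow(loader, selector,
                          {"dicom_study_list": "dicom_study_list"})
            self.add_flow(selector, to_volume,
                          {"study_selected_series_list": "study_selected_series_list"})

    if __name__ == "__main__":
        SelectAndLoadApp(do_run=True)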

MONAI Deploy Inference Service v0.1

MONAI Deploy Inference Service v0.1 is the first component of the MONAI Deploy Application Server and continues to expand MONAI’s end-to-end workflow. It adds the ability to deploy MONAI Application Packages (MAPs) created with the MONAI Deploy App SDK into a Kubernetes cluster.

Highlights include:

  • Register a MAP in the Helm Charts of MIS.
  • Upload inputs through a REST API request and make them available to the MAP container (see the client sketch after this list).
  • Provision resources for the MAP container.
  • Provide outputs of the MAP container to the client who made the request.
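
A rough client-side sketch of that request/response cycle follows. The host, route, and field names are placeholders, not the actual MIS API; consult the MIS documentation for the real endpoints.

    # Hypothetical client call to a deployed MIS instance; the endpoint, field
    # names, and MAP reference are placeholders, not the documented MIS API.
    import requests

    with open("input/volume.nii.gz", "rb") as f:
        resp = requests.post(
            "http://mis.example.local:8000/upload",  # placeholder endpoint
            files={"file": f},                       # input made available to the MAP
            data={"map": "my-map:0.1.0"},            # placeholder MAP reference
            timeout=300,
        )
    resp.raise_for_status()
    with open("output/result.zip", "wb") as f:
        f.write(resp.content)                        # MAP outputs returned to the client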

Check out the new MONAI Deploy tutorials that walk you through creating a MAP using the App SDK, deploying MIS, and pushing your MAP to MIS to run as a cloud-native microservice.

You can find more in-depth information about each release under their respective projects in the Project MONAI GitHub.


Programming Distributed Multi-GPU Tensor Operations with cuTENSOR v1.4

The NVIDIA cuTENSOR library, version 1.4, supports up to 64-dimensional tensors and distributed multi-GPU tensor operations, and improves its tensor contraction performance model.

Today, NVIDIA is announcing the availability of cuTENSOR version 1.4, which supports up to 64-dimensional tensors and distributed multi-GPU tensor operations, and includes an improved tensor contraction performance model. The software can be downloaded now free of charge.

Download the cuTENSOR software.

What’s New?

  • Supports up to 64-dimensional tensors.
  • Supports distributed, multi-GPU tensor operations.
  • Improved tensor contraction performance model (i.e., algo CUTENSOR_ALGO_DEFAULT).
  • Improved performance for tensor contractions that have an overall large contracted dimension (a parallel reduction was added).
  • Improved performance for tensor contractions that have a tiny contracted dimension.
  • Improved performance for outer-product-like tensor contractions (e.g., C[a,b,c,d] = A[b,d] * B[a,c]; see the einsum sketch after this list).
  • Additional bug fixes.
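
For reference, the outer-product-like contraction in that bullet computes C[a,b,c,d] = A[b,d] * B[a,c] in index notation; a NumPy einsum equivalent (semantics only — it does not call cuTENSOR) looks like:

    # Semantics of the outer-product-like contraction C[a,b,c,d] = A[b,d] * B[a,c],
    # shown with NumPy einsum; cuTENSOR performs the same contraction on the GPU.
    import numpy as np

    A = np.random.rand(3, 5)  # indices (b, d)
    B = np.random.rand(2, 4)  # indices (a, c)
    C = np.einsum("bd,ac->abcd", A, B)
    print(C.shape)            # (2, 3, 4, 5)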

For more information, see the cuTENSOR Release Notes.

About cuTENSOR

cuTENSOR is a high-performance CUDA library for tensor primitives; its key features include:

  • Extensive mixed-precision support:
    • FP64 inputs with FP32 compute.
    • FP32 inputs with FP16, BF16, or TF32 compute.
    • Complex-times-real operations.
    • Conjugate (without transpose) support.



Has anyone used the TensorFlow Lite Model Maker to make an object detection model for a Raspberry Pi? I am trying to make a model and am in DESPERATE need of some help.

submitted by /u/Matthewdlr4
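
For anyone hitting the same wall, a minimal Model Maker training sketch is below. The paths, label map, and hyperparameters are placeholders, and the API shape follows the tflite-model-maker documentation from around this time.

    # Minimal sketch of training a TFLite object-detection model with Model Maker;
    # paths, label map, and hyperparameters are placeholders.
    from tflite_model_maker import model_spec, object_detector

    spec = model_spec.get("efficientdet_lite0")  # smallest variant, Pi-friendly
    train_data = object_detector.DataLoader.from_pascal_voc(
        "images/train", "annotations/train", label_map={1: "my_object"}
    )
    model = object_detector.create(
        train_data, model_spec=spec, epochs=50, batch_size=8, train_whole_model=True
    )
    model.export(export_dir=".", tflite_filename="model.tflite")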


TensorFlow – Help Protect the Great Barrier Reef

Hi everyone, hope you are doing well. I am new to machine learning and TensorFlow. I was wondering if anyone wants to team up or include me in your team; I would be very grateful. I want to work on a real-life project, and this seems to be the best one. Thank you.

submitted by /u/boringly_boring


Help with Tensorflow Lite

Is anyone here able to help me make a TensorFlow Lite object detection model I can run on my Pi? I have all of the training data collected and labeled; I just need help making the model.

I have tried a few things, including the TensorFlow Lite Model Maker as well as doing it from scratch locally. I just need help making my model.

submitted by /u/Matthewdlr4
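
Once a model.tflite exists (for example, from the Model Maker sketch above), here is a sketch of running it on the Pi with the lightweight tflite_runtime interpreter; the model path is a placeholder and the zero tensor stands in for a real camera frame.

    # Sketch of running an exported detection model on a Raspberry Pi with
    # tflite_runtime; the zero tensor stands in for a preprocessed camera frame.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder input
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()

    for out in interpreter.get_output_details():        # boxes, classes, scores, count
        print(out["name"], interpreter.get_tensor(out["index"]).shape)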


Noob Here! Can you answer something for me?

Afternoon!

I would like to create an app around community-based image feedback. Is it possible to create a model from how the community rates your existing images and use it to tentatively give a new image a score before anyone votes on it? Can I also incorporate other factors of the image, such as distance between objects or the color of items, to further refine the model later on?

submitted by /u/programmrz
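
In principle, this is a regression problem: community ratings become the targets for an image model, and extra handcrafted features can be concatenated in later. A hedged Keras sketch of the image-to-score half, where the backbone, sizes, and training pipeline are all illustrative placeholders:

    # Illustrative Keras model that regresses a score from an image; the backbone,
    # sizes, and training pipeline are placeholders, not a recommendation.
    import tensorflow as tf

    inputs = tf.keras.Input(shape=(224, 224, 3))
    features = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")(inputs)
    score = tf.keras.layers.Dense(1)(features)  # predicted community rating
    model = tf.keras.Model(inputs, score)
    model.compile(optimizer="adam", loss="mse")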


Different Outputs on Mac M1 and Windows

I am running CNN code in a Jupyter notebook.

TensorFlow is giving me a very good output on Windows, but on Mac the loss doesn’t change at all.

It’s literally the same code, and I am trying to figure out why this is happening.

https://github.com/jeffheaton/t81_558_deep_learning/blob/master/install/tensorflow-install-mac-metal-jul-2021.ipynb

I followed these instructions for installing TensorFlow on Mac.

TensorFlow version: 2.5.0; Keras version: 2.5.0; Python 3.9.7 (packaged by conda-forge, Clang 11.1.0); Pandas 1.3.4; scikit-learn 1.0.1; GPU is available.

These are my TensorFlow details.

submitted by /u/viniltummala
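
Not a fix, but a first diagnostic that sometimes helps with this kind of divergence: pin every seed and print which device TensorFlow actually picked on each machine, so the Windows and Mac runs can be compared like for like.

    # Diagnostic sketch: fix all seeds and list the devices TensorFlow sees,
    # so the two machines can be compared like for like; this narrows down,
    # but does not by itself explain, the gap.
    import random

    import numpy as np
    import tensorflow as tf

    random.seed(0)
    np.random.seed(0)
    tf.random.set_seed(0)
    print(tf.__version__, tf.config.list_physical_devices())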


How to incorporate a "black box" layer into a model? (Quantum Computing)

I’m trying to create a neural network that, given the pure initial state of a quantum circuit (a 2D vector), spits out 2 numbers that are essentially fed into a quantum computer to get results out. Currently, I have the step where I send a query to the quantum computer as a layer in the model. The quantum computer will then (in this case) spit out two numbers, which I’d compare to the theoretical values.

As I was trying to implement this using the functional API, my custom layer was yelling at me because it did not like me using tf.unstack when it tried to build the layer by passing a symbolic tensor through it. While this would probably be somewhat simple to fix, I’m concerned about how Qiskit would react to the values from the symbolic tensor, or whether it would work at all. Are there any workarounds or already-implemented functions to achieve this?

Code:

Data Generation:

    # Assumed imports for this snippet (not shown in the original post)
    import numpy as np
    import tensorflow as tf
    from sklearn import preprocessing

    # TODO extend to complex amplitudes
    def generate_Hadamard_data(num_data):
        # Draw random 2D states and normalize them to unit vectors
        data = []
        for _ in range(num_data):
            data_elem = preprocessing.normalize(np.random.rand(1, 2)).tolist()[0]
            data.append(data_elem)
        # Apply a Hadamard gate to each state to get the target outputs
        inv_sq2 = 1 / np.sqrt(2)
        hadamard = np.array([[inv_sq2, inv_sq2], [inv_sq2, -1 * inv_sq2]])
        output = [np.matmul(hadamard, state).tolist() for state in data]
        data_ten = tf.constant(np.array(data).T, dtype=tf.float32)
        output_ten = tf.constant(np.array(output).T, dtype=tf.float32)
        print(output_ten)
        return (tf.data.Dataset.from_tensor_slices(data_ten),
                tf.data.Dataset.from_tensor_slices(output_ten))

Custom Layer:

    # Assumed imports for this snippet (not shown in the original post)
    from qiskit import IBMQ, QuantumCircuit, execute, pulse, transpile
    from qiskit.circuit import Gate
    from qiskit.pulse.library import Gaussian

    class HadamardCircuitLayer(tf.keras.layers.Layer):
        '''
        Takes in the initial data (state) and output (pulse params) from the
        neural network and runs it on the backend ibmq_armonk
        '''
        def __init__(self, initial_data, shots=1024):
            super(HadamardCircuitLayer, self).__init__()
            # Unpack data
            data_iterator = initial_data.as_numpy_iterator()
            self.zero_coeffs = data_iterator.next()
            self.one_coeffs = data_iterator.next()
            self.shots = shots

        def build(self, shape):
            pass

        def call(self, output):
            custom_gate = Gate('custom_gate', 1, [])
            provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
            backend = provider.get_backend('ibmq_armonk')
            post_q_list = []
            for a, b, pulse_params in zip(self.zero_coeffs, self.one_coeffs,
                                          tf.unstack(output)):
                # Re-normalize the amplitudes and prepare the initial state
                norm = np.sqrt(a**2 + b**2)
                qc = QuantumCircuit(1, 1)
                qc.initialize([a / norm, b / norm], 0)
                qc.append(custom_gate, [0])
                qc.measure(0, 0)
                # Build a Gaussian pulse from the network's two output parameters
                pul = pulse_params.numpy()
                with pulse.build(backend, name='custom') as my_schedule:
                    pulse.play(Gaussian(duration=64, amp=pul[0], sigma=np.e**pul[1]),
                               pulse.drive_channel(0))
                qc.add_calibration(custom_gate, [0], my_schedule)
                qc = transpile(qc, backend)
                job = execute(qc, backend=backend, shots=self.shots)
                counts = job.result().get_counts()
                post_q_list.append(np.array([counts["0"] / self.shots,
                                             counts["1"] / self.shots]))
            output_qten = tf.constant(post_q_list, dtype=tf.float32)
            print(output_qten)
            return output_qten

Implementation:

    # I haven't gotten far since running into the unstack error
    x_train, y_train = generate_Hadamard_data(1)
    dataset = tf.data.Dataset.zip((x_train, y_train))

    inputs = tf.keras.Input(shape=(2,))
    x = tf.keras.layers.Dense(128, activation="linear")(inputs)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Dense(128, activation="linear")(x)
    x = tf.keras.layers.Dense(2, activation="linear")(x)
    outputs = HadamardCircuitLayer(x_train)(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs,
                           name="microwave_pulse_model")

submitted by /u/soravoid
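
One standard workaround for this failure mode (symbolic tensors reaching eager-only code such as .numpy() and the Qiskit calls) is to route the black-box evaluation through tf.py_function, which hands the wrapped function concrete eager tensors at run time. A hedged sketch follows, assuming the circuit run returns one vector per input row; note that tf.py_function will not provide gradients through the quantum step unless a custom gradient is added.

    # Sketch: wrap the quantum-circuit evaluation in tf.py_function so it receives
    # concrete eager tensors (where tf.unstack and .numpy() work) instead of the
    # symbolic tensors the functional API passes during tracing. tf.py_function
    # does not give gradients through the opaque quantum call.
    import tensorflow as tf

    class BlackBoxLayer(tf.keras.layers.Layer):
        def __init__(self, run_circuits, **kwargs):
            super().__init__(**kwargs)
            self.run_circuits = run_circuits  # eager Python fn, e.g. the Qiskit loop above

        def call(self, pulse_params):
            out = tf.py_function(self.run_circuits, [pulse_params], Tout=tf.float32)
            out.set_shape(pulse_params.get_shape())  # assumes same shape out as in
            return out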


Detecting blinks and eye direction for switch input for disabilities – help needed! We have a model in Python that needs migrating into the Node app. Currently detecting blinks using MediaPipe. Helping locked-in patients speak.

submitted by /u/squarepushercheese

How to learn TensorFlow as a noob?

I am a mobile app developer. I have been working in IT for the past 5 years. I took a Udemy course, TensorFlow from Zero to Mastery. I thought that, with my decent knowledge of software development, I might pick up TensorFlow pretty quickly without really knowing the basics of machine learning, but I was so wrong: I am having a tough time understanding TensorFlow from the course. Everyone keeps saying to learn linear algebra, pandas, Keras, scikit-learn, etc., and a bunch of other stuff. This is too much for me. For now, I just want to learn how to create an ML model with given data (the data can be anything: images, text, etc.) and use that model in my web and mobile apps. I know there is something called TensorFlow Lite which I can use in my apps directly, but what is the bare minimum I need to know before I start learning TensorFlow so I can easily pick it up later?

Also, the Udemy course I took seems to be pretty good, and a lot of people seem to like it, so I don’t think it is really the instructor’s fault; I just don’t have my basics clear.

If anyone has any Udemy courses that they can point me to, that would be great. I am looking for a more practical approach and not just boring theory.

submitted by /u/BraveEvidence