Categories
Misc

Omniverse User Group Spotlights Talented Community Members

A dirty '80s-style teenager's room with video games strewn about, created using Omniverse. At NVIDIA GTC, the Omniverse User Group held its second meeting, focusing on developers and users of the NVIDIA open platform for collaboration and simulation.

Capping off a week of major announcements including the NVIDIA Omniverse Avatar, and Earth-2 Supercomputer at NVIDIA GTC last week, the community team hosted the second Omniverse User Group. 

Excited participants logged in from across the globe to hear about the future of the platform from the NVIDIA Omniverse leadership team. Participants also got a sneak peek of upcoming features and releases through presentations from partners and community members showcasing their inspiring work. 

The event culminated with an announcement of the latest contest winners, along with the first Ambassador and Omniverse Machinima expert, Pekka Varis from Catchline. Varis won the title of ambassador by helping and sharing his great knowledge of the platform with others on the forums and Discord server.

Afterward, the party migrated to the official Discord server, where the community had a blast chatting, answering questions, and learning about what excited users the most about the future of the Omniverse. 

Highlights include:

Watch the second NVIDIA Omniverse User Group

A tiled graphic of 6 headshots of user group members at the meeting.
Figure 1. NVIDIA Omniverse User Group members.

Share your work

As livestream cohost and Omniverse Community Manager, Wendy Gram, often says, “the community’s amazing work in the Omniverse inspires us every single day.” 

If you are interested in presenting to the community at a User Group meeting, in a post, or on our weekly livestream, reach out through Discord (Prof E#2041) or e-mail.

We also invite you to share your work. Tag us on social media using #NVIDIAOmniverse, or submit to the Omniverse Gallery.

We look forward to seeing you in the Omniverse or at our next events. Please follow us for the latest updates.

Connect with us:




Categories
Misc

A GFN Thursday Deal: Get ‘Crysis Remastered’ Free With Any Six-Month GeForce NOW Membership

You’ve reached your weekly gaming checkpoint. Welcome to a positively packed GFN Thursday. This week delivers a sweet deal for gamers ready to upgrade their PC gaming from the cloud. With any new, paid six-month Priority or GeForce NOW RTX 3080 subscription, members will receive Crysis Remastered for free for a limited time.

The post A GFN Thursday Deal: Get ‘Crysis Remastered’ Free With Any Six-Month GeForce NOW Membership appeared first on The Official NVIDIA Blog.

Categories
Misc

Keras for R is back!

For a while, it may have seemed that Keras for R was in some undecidable state, like Schrödinger’s cat before inspection. It is high time to correct that impression. Keras for R is back, with two recent releases adding powerful capabilities that considerably lighten previously tedious tasks. This post provides a high-level overview. Future posts will go into more detail on some of the most helpful new features, as well as dive into the powerful low-level enhancements that make the former possible.

Categories
Misc

Get 50% Off Upcoming Hands-On Training from NVIDIA

Register now for instructor-led workshops from the NVIDIA Deep Learning Institute.

Get hands-on training in AI, deep learning, accelerated computing, and data science with the NVIDIA Deep Learning Institute (DLI). DLI offers self-paced, online courses as well as instructor-led online workshops. Whether you are a developer, data scientist, professor, or student, there is a course for you within DLI. Learners who complete the courses and workshops also can earn an NVIDIA DLI certificate to demonstrate subject-matter competency and support career growth. 

Full-day workshops offer a comprehensive learning experience that includes hands-on exercises and guidance from expert instructors certified by DLI. 

Receive half-off registration for the following workshops: 

Fundamentals of Accelerated Computing with CUDA C/C++

Learn how to accelerate and optimize existing C/C++ CPU-only applications to leverage the power of GPUs using the most essential CUDA techniques and the Nsight™ Systems profiler.

  • Thursday, Nov. 18, 7:00 a.m. – 3:00 p.m. PST
  • Use code DLISC21 to receive half-off full-price registration.

Building Transformer-Based Natural Language Processing Applications

Learn how to use transformer-based natural language processing models for text classification tasks, such as categorizing documents. You’ll also get insight on how to use transformer-based models for named-entity recognition (NER) tasks and more.

  • Monday, Dec. 6, 9:00 a.m. – 5:00 p.m. CET
  • Use code DLIGTC50 to receive half-off full-price registration.

Applications of AI for Predictive Maintenance

Learn how to identify anomalies and failures in time-series data, estimate the remaining useful life of the corresponding parts, and use this information to map anomalies to failure conditions.

  • Tuesday, Dec. 14, 9:00 a.m. – 5:00 p.m. CET
  • Use code DLIGTC50 to receive half-off full-price registration.
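The predictive-maintenance workshop above centers on exactly this kind of task: flagging anomalies in time-series sensor data. As a rough, library-free sketch of one classic approach (a rolling z-score detector; the function name and sample data are invented for illustration and are not from the workshop material):

```python
import statistics

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        # A zero stdev means a perfectly flat history; skip to avoid dividing by zero.
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A mostly flat sensor signal with one sudden spike, a typical failure signature.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02, 5.0, 1.0]
print(detect_anomalies(readings))  # → [11]
```

Real deployments use far more sophisticated models, but the core idea of comparing new readings against a recent baseline carries over.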

Take advantage of the discount codes. Space is limited, so register now. >>

Visit the DLI website for details on each course and the full schedule of upcoming instructor-led workshops, which is regularly updated with new training opportunities. Also, check out our catalog of self-paced online courses.

Categories
Misc

AI Pioneers Write, So Should Data Scientists

Data Scientists’ role in producing AI-related written content to be consumed by the public

Editor’s Note: If you’re interested in sharing your data science and AI expertise, you can apply to write for our blog here.

Writing has always served a dual purpose: preserving knowledge and transferring it across communities, organizations, and beyond. Writing within the machine-learning domain serves those same purposes. Many prominent individuals have invested immense time and effort in advancing the frontier of machine learning and AI as a field. Not coincidentally, a good number of these AI pioneers and experts write a lot.

This article highlights individuals who have contributed to the wider field of AI in different shapes and forms, with an emphasis on their written work and the contribution each has made through the practice of writing AI-related content.

The essential takeaway from this article is that, as Data Scientists, we must develop soft skills such as creative and critical thinking alongside communication, and writing is an activity that cultivates all of these.

AI Experts That Write

Andrej Karpathy

At the time of writing, Andrej Karpathy is Senior Director of AI at Tesla, overseeing engineering and research efforts to bring commercial autonomous vehicles to market using massive artificial neural networks trained on millions of image and video samples.

Andrej is a prominent writer. His work has been featured in top publications such as Forbes, MIT Technology Review, Fast Company, and Business Insider. Specifically, I’ve been following Andrej’s writing through his Medium profile and his blog.

In my time as a Computer Vision student exploring the fundamentals of convolutional neural networks, Andrej’s deep learning course at Stanford proved instrumental in gaining an understanding and intuition of the internal structure of a convolutional neural network. Specifically, the written content of the course explored details such as the distribution of parameters across the CNN, the operations of the different layers within the CNN architecture, and the convolution operation that occurs between a CNN’s filter parameters and the values of an input image. Andrej uses his writing to present new ideas, explore the state of deep learning, and educate others.

Data Scientists are intermediaries between the world of numerical data representations and project stakeholders; therefore, the ability to interpret and convey understanding derived from datasets is essential to Data Scientists. Writing is one means of communication that equips Data Scientists with the capability to convey and present ideas, patterns, and learnings from data. Andrej’s writing is a clear example of how this is done: he provides clear and concise written explanations of neural network architectures, data preparation processes, and much more.

Kai-Fu Lee

Kai-Fu Lee is an AI and Data Science Expert. He has contributed significantly to AI through his work at Google, Microsoft, Apple, and other organizations.

He’s currently CEO of Sinovation Ventures. Kai-Fu has been making significant contributions to AI research by applying Artificial Intelligence in video analysis, computer vision, pattern recognition, and so on. Furthermore, Kai-Fu Lee has written books exploring the global players of AI and the future utilization and impact of AI in his book AI Superpowers and AI 2041.

Through his writing, Kai-Fu Lee dissects the strategies of nations and entities that operate abundantly within the AI domain. The communication of decisions, mindset, and national efforts that drive the AI superpowers of today is crucial to the developing nations seeking to fast-track the development of AI technologies.

However, Kai-Fu Lee also conveys through his writing the potential disadvantages that the advancement of AI technologies can have on societies and individuals. By reading Kai-Fu Lee’s written content, I’ve been able to understand how deep learning and predictive models can affect daily human lives when their use is projected into imaginative future scenarios that touch on societal issues such as bias, poverty, discrimination, and inequality.

The “dangers of AI” is a discourse held more frequently as the adoption of AI technology and data-fueled algorithms becomes commonplace within our mobile devices, appliances, and processes. Data Scientists are ushering in the future one model at a time, and it’s our responsibility to communicate the fact that we conduct an in-depth cost-benefit analysis of technologies before they are integrated into society. These considerations put consumers’ minds at ease by ensuring that the positive and negative impacts of AI technology are not just afterthoughts to Data Scientists.

An effective method for Data Scientists to communicate these considerations is through writing. There’s value in writing a post or two explaining the data sources, network architectures, algorithms, and extrapolated future uses of AI applications or predictive models based on current utilization. A Data Scientist who covers these steps as part of their process establishes a sense of accountability and trust with product consumers and the community at large.

Francois Chollet

TensorFlow and Keras are two primary libraries that are used extensively within data science and machine-learning projects. If you use any of these libraries, then Francois Chollet is probably an individual within AI you’ve come across.

Francois Chollet is an AI researcher who currently works as a Software Engineer at Google. He’s recognized as the creator of the deep-learning library Keras and a prominent contributor to the TensorFlow library. And, no surprise here, he writes.

Through his writing, Francois has expressed his thoughts on concerns, predictions, and limitations of AI. The impact of Francois’s writing on me as a machine-learning practitioner comes from his essays on software engineering topics, more specifically API design and software development processes. Through his famous book Deep Learning With Python, Francois has educated hundreds of thousands on practical deep learning and the use of the Python programming language for machine-learning tasks.

Through writing, Data Scientists have the opportunity to promote best practices in software development and data science processes among team members or organizations.

Conclusion

Academic institutions covering Data Science should include writing in the course curriculum. Cultivating writing as a habit through the years in academia proves beneficial in professional roles.

Professional Data Scientists should expand their craft by adopting writing as an integral aspect of communicating ideas, techniques, and concepts. As the work of the AI experts mentioned in this article shows, the written work produced can take the form of essays, blogs, articles, and so on. Even interacting with peers and engaging in discourse on platforms such as LinkedIn or Twitter can be beneficial for Data Science professionals.

Novice Data Scientists often ask what methods they can adopt to improve their skills, knowledge, and confidence; unsurprisingly, the answer to that is also writing. Writing enables the expression of ideas in a structured manner that is difficult to convey through other communicative methods. It also serves as a method to reinforce learning.

This post is a fantastic source of inspiration for Data Scientists looking for ideas, and if you’re feeling inspired, read this article about the different sorts of writing in the field of machine learning.

Categories
Misc

An Important Skill for Data Scientists and Machine Learning Practitioners

The most important soft skill for ML practitioners and Data Scientists

Editor’s Note: If you’re interested in sharing your data science and AI expertise, you can apply to write for our blog here.

Data Science as a discipline and profession demands that its practitioners possess various skills, ranging from soft skills such as communication and leadership to hard skills such as deductive reasoning, algorithmic thinking, programming, and so on. But there’s one crucial skill that Data Scientists should attain irrespective of their experience, and that is writing.

Even Data Scientists working in technical fields such as quantum computing or healthcare research need to write. Strong writing ability takes time to develop, and Data Scientists confront challenges that might prevent them from expressing their thoughts easily. That’s why this article covers a variety of writing strategies and explains how each benefits Data Science and Machine Learning professionals.

1. Short-form writing

Let’s start with the most typical and accessible styles of writing we encounter. Writing in short form is typically low effort and doesn’t take up too much time. Machine learning and data science content written on Twitter, LinkedIn, Facebook, Quora, and StackOverflow all falls into this category.

Image with a laptop and mobile phone
Figure 1: Photo by Austin Distel on Unsplash

Long-form content, such as books, articles, and essays, is usually the most valuable material in the ML field. All of it requires time to write, read, and analyze. Short-form content on social media platforms, on the other hand, can deliver information while requiring far less effort and time than long-form content.

Currently, we have the privilege of witnessing discourse and ideas shared between AI pioneers and reputable machine-learning practitioners without having to wait for them to write and publish a research paper or an essay. Writing short-form posts on social media provides insight into opinions and views that are not easily expressed verbally, and it gives you a way to participate and share your own opinions.

For those who want to experiment with connecting with other ML experts through social media postings, I recommend following some people who post genuine and relevant information about Machine learning and Data Science. Take some time to read the tone of the discussions and contributions on posts, and if you have anything valuable to contribute, speak up.

To get you started, here is a list of individuals who post AI-related content (among other interesting things): Andrew Ng, Geoffrey Hinton, Allie K. Miller, Andrej Karpathy, Jeremy Howard, Francois Chollet, Aurélien Geron, Lex Fridman. There are plenty more individuals to follow, but content from these individuals should keep you busy for a while.

Question/Answer platforms

Questions and answers as a form of writing have the lowest entry barrier and do not consume much time, depending on your ability to answer the questions posed.

Given your profession, I’m sure you’ve heard of StackOverflow, the internet’s most popular resource for engineers. When it comes to asking questions on StackOverflow, things aren’t as simple; clarity and transparency are required. Writing queries properly is such an important component of StackOverflow that they’ve published a comprehensive guide on the subject.

Here’s the key takeaway in this section: asking and answering questions on StackOverflow helps you become concise and clear when posing queries, as well as thorough when responding.

2. Emails and Messages

Image of laptop and mobile phone
Figure 2: Photo by Maxim Ilyahov on Unsplash

Writing emails and messages is nothing specific to machine learning, but Data Scientists and Machine-Learning practitioners who practice the art of composing effective messages tend to flourish within corporations and teams for obvious reasons, some of which are the ability to contribute, network, and get things done.

Composing well-written messages and emails can land you a new role, get your project funded, or get you into an academic institution. Purvanshi Mehta wrote an article that explores effective methods of cold-messaging individuals on LinkedIn to build networks. Purvanshi’s article is a step-by-step guide to adoptable cold-messaging etiquette.

3. Blogs and Articles

Many experts believe that blogs and articles have a unique role in the machine learning community. Articles are how professionals stay up to date on software releases, learn new methods, and communicate ideas.

Technical and non-technical ML articles are the two most frequent sorts of articles you’ll encounter. Technical articles are composed of descriptive text coupled with code snippets or gists that describe the implementation of particular features. Non-technical articles include more descriptive language and pictures to illustrate ideas and concepts.

4. Newsletters

Developer seating on a table and working.
Figure 3: Photo by cottonbro from Pexels

Starting and maintaining a newsletter might not be for every Data Scientist, but this sort of writing has been shown to provide professional and financial advantages to those willing to put in the effort.

A newsletter is a key strategic play for DS/ML professionals to increase awareness and presence in the AI sector. A newsletter’s writing style is not defined, so you may write it however you choose. You might start a formal, lengthy, and serious newsletter or a short, informative, and funny one.

The lesson to be drawn from this is that creating a newsletter may help you develop a personal brand in your field, business, or organization. Those who like what you do will continue to consume and promote your material.

There are a thousand reasons why you should not start a newsletter today, but to spark some inspiration, below are some ideas you can base your newsletter on, and I’ve also included some AI newsletters you should subscribe to.

Newsletter Ideas related to AI:

  • A collection of AI/ML videos to watch, with your input on each video.
  • A collection of AI/ML articles to read.
  • Job postings in your areas that job seekers might be interested in.
  • Up-to-date relevant AI news for ML practitioners interested in the more practical application of AI.

Remember that the frequency, length, and content of your newsletter are all defined by you. You could start a monthly newsletter if you feel you don’t have much time, or a daily newsletter to churn out content like a machine.

Machine Learning and Data Science newsletters to subscribe to:

5. Documentation

Developer coding, with code displayed on a monitor.
Figure 4: Photo by Sigmund on Unsplash.

Documentation, both technical and non-technical, is a common activity among software engineering occupations. Data Scientists are not exempt from the norm, and documentation that explains software code or individual features is recommended and considered best practice.

When is a project successful? Some might consider it to be when your model achieves an acceptable accuracy on a test dataset.

Experienced Data Scientists understand that project success is influenced by a number of variables, including software maintainability, longevity, and knowledge transfer. Software documentation is a task that can improve the prospects of a project beyond the capabilities of a single team member; not to mention, it provides an extra layer of software quality and maintainability.

One of the main advantages of documentation that Data Scientists should be aware of is its role in reducing queries about source code from new project members or novice Data Analysts. The majority of questions about source code concern file locations, coding standards, and best practices. This information can all be recorded once and referenced by many individuals.

Here are some ideas of items you could document:

  • Code Documentation: It’s critical to standardize implementation style and format in order to guarantee uniformity across applications. This conformity makes the transition for new developers into a codebase easier since coding standards are given through code documentation.
  • Research and Analysis: Given the importance of software product features, successful development is always dependent on thorough study and analysis. Any ML expert who has worked on a project at the start will have handled the plethora of feature requests from stakeholders. Documenting information surrounding feature requests enables other parties involved in the project to get a more straightforward overview of the requirement and usefulness of the proposed feature. It also enforces the feature requester to conduct better research and analysis.
  • Database Configurations / Application Information: Documenting information particular to applications, such as configuration parameters and environment variables, is critical for any software team, especially if you move to a new job or company.
  • How-tos: Installation of software libraries and packages may be difficult, but the fact is that there could be different installation processes for various operating systems or even versions. It’s not uncommon to discover missing dependencies in official library documentation and quirks you must go through to install the program.
  • API Documentation: When teams develop internal and external APIs (Application Programming Interfaces), they should document the components of methods, functions, and data resources needed by those APIs. There’s nothing more annoying than working with a non-documented API; the whole process becomes a guessing game, and you’ll spend time researching the parameters, internal workings, and outputs of an undocumented API. Save your team and clients time by creating a smooth experience when consuming the technical resources you make.
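As a small illustration of the code- and API-documentation points above, consider a documented helper function. This is a hedged sketch: the function name, its parameters, and the default lifetime are invented for illustration, not taken from any real codebase.

```python
def estimate_rul(hours_run, expected_life_hours=10_000.0):
    """Estimate the remaining useful life (RUL) of a part.

    Parameters
    ----------
    hours_run : float
        Hours the part has been in service.
    expected_life_hours : float, optional
        Rated lifetime of the part (default: 10,000 hours).

    Returns
    -------
    float
        Estimated hours of service remaining; never negative.
    """
    return max(expected_life_hours - hours_run, 0.0)

print(estimate_rul(7_500))  # → 2500.0
help(estimate_rul)          # the docstring doubles as reference material
```

A docstring like this answers the file-location and usage questions before they are asked, and tools such as Sphinx can turn it into browsable API documentation with no extra writing.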

There’s no question that extensive resources allow organizations to conduct many types of documentation, and some even hire technical writers. Although those are all viable options, it is critical for machine-learning experts who wish to take software completeness seriously to practice documenting the programs and software they develop, in order to demonstrate that they can provide thorough explanations.

A quick Google search on “how to write good software documentation” turns up good resources that all share the same messages and best practices on documentation.

6. Research Papers

Student studying in a library.
Figure 5: Photo by Ron Lach from Pexels.

In 2020, I published an article on how to read research papers, which became a huge hit. When it comes to utilizing ML algorithms and models, we have to optimize the way we read these papers in much the same way that seasoned machine-learning experts do.

Writing machine-learning research papers is the other side of the coin. I’ve never written a research paper, and I don’t intend to start now. However, some Machine-learning specialties are very concerned with writing and publishing research studies. As a metric of career success, research institutions and firms use the number of papers published by an individual or group.

There’s an art to writing research papers; researchers and scientists must think about the structure and content of the data to ensure that a message, breakthrough, or idea is delivered effectively. Most of us are probably not writing research papers anytime soon, but there’s value in adopting the practice of writing good research papers. For example, having an abstract, introduction, and conclusion is a writing structure transferable to other writing pieces.

Go ahead and read some research papers; take note of the language, structure, and use of visual imagery the authors employ. Try to adopt any good practices you identify in your next written piece.

7. Books and E-books

A shelf of books.
Figure 6: Photo by Nick Fewings on Unsplash.

There’s no doubt that ML/DS books are the most authoritative texts on machine learning theory and hands-on expertise. I’m not suggesting that all data scientists and ML engineers should write a book. But bear with me.

I looked through several of the authors on my shelf who wrote books in AI/ML, and they all have extensive experience in their fields.

Writing non-fiction, technical books about machine learning is very difficult. It requires a high level of theoretical and practical industry knowledge that can only be attained through total immersion in study, research, and implementation. To educate hundreds of ML Engineers and Data Scientists, your reputation must be based on solid academic, commercial, or research credentials. Not to mention that writers require creativity when delivering well-written books. More specifically, they have to master the art of conveying sophisticated topics in books.

My argument is that to create a timeless machine-learning book, you must go down the road of expertise. This may not sound inviting, but I’d like you to consider that setting a long-term objective of writing a book will push you to delve deeper into the subject of machine intelligence or your chosen field, which will enhance your general understanding of AI.

Books for Data Scientist and Machine Learning practitioners:

You will find that most of the authors listed above have produced the majority, if not all, of the forms of writing listed in this article, regardless of their domain specialty, which is why I consider writing a vital skill for Machine Learning practitioners and Data Scientists to master.

Conclusion

Whenever I’m asked what life decision provided me with the most benefit, whether financial, academic, or career-related, I usually answer with my decision to write.

Throughout this post, you’ve seen several advantages Data Scientists and Machine Learning experts may obtain if they write AI-related material on a regular basis. This section centralizes all the benefits listed throughout this article to make sure it all hits home.

  • ML professionals employ writing to communicate complicated subjects in a simple way. By reading a well-written blog post by Andrej Karpathy, I was able to acquire a greater appreciation for the practical application of convolutional neural networks.
  • Various types of writing can help you improve your creativity and critical thinking. I recently read AI 2041 by Kai-Fu Lee and Chen Qiufan, in which the authors examine AI technologies and their effects on human lives through well-written fictional stories and thorough explanations of AI technologies. Both writers have written for many years and have authored other books. It’s reasonable to conclude that their writing abilities allowed the writers to express future situations involving AI technology and explore the unknown societal impact of AI integration through critical and logical predictions based on current AI development.
  • Writing in the form of storytelling gives life to projects. Good stories are spoken, but great stories are written. The retelling of machine-learning projects to stakeholders such as customers, investors, or project managers takes a positive and exciting turn when coupled with the art of storytelling. A Data Scientist explaining to stakeholders why a new state-of-the-art cancer detection deep-learning model should be leveraged across federal hospitals becomes more impactful and relatable when coupled with the story of an early diagnosis of a patient.
  • Within the machine learning community, writing is a successful method of knowledge transfer. Most of the information you’ll get in the DS/ML world will be through written content. Articles, essays, and research papers are all repositories of years worth of knowledge organized into succinct chapters with clear explanations and digestible formats. Writing is an efficient way to condense years of knowledge and experience.

Did you know that the AI pioneers and experts we admire and learn from also publish regularly? In this article, I compiled a shortlist of individuals in the AI field and provided samples of their work, emphasizing the value and consequence of their writing.

Thanks for reading.

Categories
Misc

NVIDIA Announces Financial Results for Third Quarter Fiscal 2022

NVIDIA today reported record revenue for the third quarter ended October 31, 2021, of $7.10 billion, up 50 percent from a year earlier and up 9 percent from the previous quarter, with record revenue from the company’s Gaming, Data Center and Professional Visualization market platforms.

Categories
Misc

Error when installing tensorflow…

Hey guys, I get this error message when I try to run:

import tensorflow as tf
print(tf.__version__)

2021-11-17 19:57:46.733325: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library ‘cudart64_110.dll’; dlerror: cudart64_110.dll not found
2021-11-17 19:57:46.739099: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2.7.0

Can anybody help me figure out what’s the problem?

submitted by /u/Davidescu-Vlad
[visit reddit] [comments]
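The warning in the post above is informational rather than fatal: TensorFlow imported fine (it printed version 2.7.0) but could not find the CUDA runtime DLL, so it will fall back to the CPU; on machines with an NVIDIA GPU this is typically resolved by installing the matching CUDA 11 toolkit and cuDNN. A minimal sketch for confirming what TensorFlow can actually see, assuming TF 2.x (the helper name is ours, not a TensorFlow API):

```python
def gpu_status():
    """Report whether TensorFlow can see a GPU, without crashing on machines
    where TensorFlow or the CUDA runtime is missing."""
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow is not installed"
    # list_physical_devices is the TF 2.x way to enumerate visible devices.
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        return f"{len(gpus)} GPU(s) visible to TensorFlow"
    return "no GPU visible; TensorFlow will run on the CPU"

print(gpu_status())
```

If this reports no visible GPU on a machine that has one, the CUDA libraries on the PATH are the usual suspect.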

Categories
Misc

How to use tensorflow with an AMD GPU

https://www.youtube.com/watch?v=Np11T5-_KhA

submitted by /u/limapedro
[visit reddit] [comments]

Categories
Misc

Overcoming Advanced Computing Challenges with Million-X Performance

Learn more about the many ways scientists are applying advancements in Million-X computing to solve global challenges.

At NVIDIA GTC last week, Jensen Huang laid out the vision for realizing multi-Million-X speedups in computational performance. Such a breakthrough could meet the enormous computational requirements of data-intensive research, helping scientists further their work.



Solving challenges with Million-X computing speedups

Million-X unlocks new worlds of potential and the applications are vast. Current examples from NVIDIA include accelerating drug discovery, accurately simulating climate change, and driving the future of manufacturing.

Drug discovery

Researchers at NVIDIA, CalTech, and the startup Entos blended machine learning and physics to create OrbNet, speeding up molecular simulations by many orders of magnitude. As a result, Entos can accelerate its drug discovery simulations by 1,000x, finishing in 3 hours what would have taken more than 3 months.

Climate change

Last week, Jensen Huang announced plans to build Earth-2, a digital twin of the Earth in Omniverse. The world’s most powerful AI supercomputer will be dedicated to simulating climate models that predict the impacts of global warming in different places across the globe. Understanding these impacts over time can help humanity plan for and mitigate them at a regional level. 

Future manufacturing

Earth is not the first digital twin project enabled by NVIDIA. Researchers are already building physically accurate digital twins of cities and factories. The simulation frontier is still young and full of potential, waiting for the catalyst that massive increases in computing will provide.

Share your Million-X challenge

Share how you are using Million-X computing on Facebook, LinkedIn, or Twitter using #MyMillionX and tagging @NVIDIAHPCDev.

The NVIDIA developer community is already changing the world, using technology to solve difficult challenges. Join the community.

Below are a handful of notable examples.

The community that is changing the world

Smart waterways, safer public transit, and eco-monitoring in Antarctica 

Dr. Johan Barthelemy standing behind a datacenter computer with exposed wires.
Figure 1. Dr. Johan Barthelemy.

The work of Johan Barthelemy is interdisciplinary and covers a variety of industries. As the head of the University of Wollongong’s Digital Living Lab, he aims to deliver innovative AIoT solutions that champion ethical and privacy-compliant AI. 

Currently, Barthelemy is working on an assortment of projects, including a smart waterways computer vision application that detects stormwater blockage in real time, helping cities act before blockages cause wider problems. 

Another project, currently being deployed in multiple cities, is AI camera software that detects and reports violence on Sydney trains through aggressive-stance modeling. 

An AIoT platform for remotely monitoring Antarctica’s terrestrial environment is also in the works. Built around an NVIDIA Jetson Xavier NX edge computer, the platform will be used to monitor the evolution of moss beds—their health being an early indicator of the impact of climate change. The data collected will also inform a variety of models developed by the Securing Antarctica’s Environmental Future community of researchers, in particular hydrology and microclimate models.

Connect: LinkedIn | Twitter | Digital Living Lab

Never-before-seen views of SARS-CoV-2

A close-up atomic-level view and breakdown of the SARS-CoV-2 Delta variant.
Figure 2. Atomic-level view of the SARS-CoV-2 Delta variant.

NVIDIA researchers and 14 partners successfully developed a platform to explore the composition, structure, and dynamics of aerosols and aerosolized viruses at the atomic level. 

This work surmounts the previously limited ability to examine aerosols at the atomic and molecular level, a limitation that had obscured our understanding of airborne transmission. Leveraging the platform, the team produced a series of novel discoveries regarding the SARS-CoV-2 Delta variant.

These breakthroughs dramatically extend the capabilities of multiscale computational microscopy as a complement to experimental methods. The full impact of the project has yet to be realized.

Species recognition, environmental monitoring, and adaptive streaming

Dr. Albert Bifet smiling at the camera in a lit open space.
Figure 3. Dr. Albert Bifet.

Dr. Albert Bifet is the Director of Te Ipu o te Mahara, the Artificial Intelligence Institute at the University of Waikato, and Professor of Big Data at Télécom Paris.

Bifet also leads the TAIAO project, a data science program using an NVIDIA DGX A100 to build deep learning models for species recognition. He is codeveloping River, a new machine-learning library in Python for online/streaming machine learning, and building a new data repository to improve reproducibility in environmental data science.
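The one-example-at-a-time pattern behind streaming libraries such as River can be sketched in a few lines of plain Python. This is a hedged illustration of the interface style (`learn_one`/`predict_proba_one`), not River's actual implementation:

```python
import math

class OnlineLogReg:
    """Tiny streaming logistic regression: the model updates from one
    example at a time and never stores the stream itself."""
    def __init__(self, lr=0.1):
        self.lr = lr
        self.w = {}   # feature name -> weight
        self.b = 0.0

    def _raw(self, x):
        return self.b + sum(self.w.get(f, 0.0) * v for f, v in x.items())

    def predict_proba_one(self, x):
        return 1.0 / (1.0 + math.exp(-self._raw(x)))

    def learn_one(self, x, y):
        # One SGD step on the log loss for a single example.
        g = self.predict_proba_one(x) - y
        for f, v in x.items():
            self.w[f] = self.w.get(f, 0.0) - self.lr * g * v
        self.b -= self.lr * g

model = OnlineLogReg()
for x, y in [({"size": 1.0}, 0), ({"size": 3.0}, 1)] * 200:
    model.learn_one(x, y)
print(model.predict_proba_one({"size": 3.0}))  # well above 0.5 after training
```

Because each example is processed once and discarded, memory use stays constant no matter how long the stream runs, which is the property that makes this style suitable for continuous environmental sensor data.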

Additionally, researchers at TAIAO are building new approaches to compute GPU-based SHAP values for XGBoost, and developing a new adaptive streaming XGBoost.

Connect: Website | LinkedIn | Twitter

Medical imaging, therapy robots, and NLP depression detection

Figure 4. Robots developed for medical use.

The current interests of Dr. Ekapol Chuangsuwanich fall within the medical imaging domain, including chest X-ray and histopathology technology. Over the past few years, however, his work has spanned many fields, including NLP, ASR, and medical imaging. 

Last year, Chuangsuwanich and his team developed the PYLON architecture, which can learn precise pixel-level object location with only image-level annotation. This is deployed across hospitals in Thailand to provide rapid COVID-19 severity assessments and to facilitate screening of tuberculosis in high-risk communities.
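Learning pixel-level localization from image-level labels builds on ideas like class activation mapping (CAM); a minimal NumPy sketch of that baseline idea (not PYLON's actual architecture, and with hypothetical feature shapes) looks like:

```python
import numpy as np

# A class activation map localizes a class after training on image-level
# labels only: project the conv feature maps through the classifier
# weights for that class to get a spatial heat map.
rng = np.random.default_rng(0)
feats = rng.random((8, 14, 14))   # C x H x W conv features (hypothetical)
w_cls = rng.random(8)             # classifier weights for one class

cam = np.tensordot(w_cls, feats, axes=([0], [0]))         # H x W heat map
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (14, 14)
```

Upsampling such a map to the input resolution yields a coarse localization from classification labels alone; architectures like PYLON refine this idea to get the precise pixel-level maps described above.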

Additionally, he is working on NLP and ASR robots for medical use, including a speech therapy helper and call center robot with depression detection functionality. His startup, Gowajee, is also providing state-of-the-art ASR and TTS for the Thai language. These projects have been created using the NVIDIA NeMo framework and deployed on NVIDIA Jetson Nano devices.

Connect: Website | Org | Facebook

Trillion atom quantum-accurate molecular dynamics simulations

An animation of floating orbs tracked to an image of multiple fingerprints.
Figure 5. Atomic radial cutoff used to generate descriptors, represented as fingerprints.

Researchers from the University of South Florida, NVIDIA, Sandia National Labs, NERSC, and the Royal Institute of Technology collaborated to produce a machine-learned interatomic potential for LAMMPS named SNAP (Spectral Neighbor Analysis Potential).

SNAP was found to be accurate across a huge pressure-temperature range, from 0 to 50 Mbar and 300 to 20,000 Kelvin. Peak molecular dynamics performance was greater than 22x the previous record, achieved on a 20-billion-atom system simulated on Summit for 1 ns in a day.
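At the heart of any such MD run is a time-integration loop. A toy velocity-Verlet step, with a hypothetical harmonic force standing in for the machine-learned SNAP potential, can be sketched as:

```python
import numpy as np

# Velocity-Verlet integration, the core update inside MD codes such as
# LAMMPS. The SNAP work replaces `forces` with a machine-learned
# interatomic potential; this toy uses a harmonic spring for illustration.
def forces(pos, k=1.0):
    return -k * pos  # hypothetical harmonic potential, F = -k x

def verlet_step(pos, vel, dt=1e-3, mass=1.0):
    f = forces(pos)
    pos = pos + vel * dt + 0.5 * (f / mass) * dt**2       # position update
    vel = vel + 0.5 * (f + forces(pos)) / mass * dt        # velocity update
    return pos, vel

pos = np.array([1.0, 0.0, 0.0])
vel = np.zeros(3)
for _ in range(1000):
    pos, vel = verlet_step(pos, vel)
```

Velocity Verlet is favored in MD because it conserves energy well over long runs; the expensive part of each step is the force evaluation, which is exactly what the GPU-accelerated SNAP kernel speeds up.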

The project qualified as a Gordon Bell Prize finalist, and the near-perfect weak scaling of SNAP MD highlights the potential to extend quantum-accurate MD to trillion-atom simulations on upcoming exascale platforms. This dramatically expands the scientific return of X-ray free electron laser diffraction experiments.

BioInformatics, smart cities, and translational research

Dr. Ng See-Kiong smiling at the camera.
Figure 6. Dr. Ng See-Kiong.

Dr. Ng See-Kiong is constantly in search of big data. A practicing data scientist, See-Kiong is also a Professor of Practice and Director of Translational Research at the National University of Singapore.

Current projects on his desk leverage the NVIDIA NeMo framework, covering NLP for indigenous and vernacular languages across Singapore and New Zealand. See-Kiong is also working on intelligent COVID-19 contact tracing and outbreak analysis, intelligent social event sensing, and assessing the credibility of information in new media.

Connect: Website