Categories
Misc

Omniverse User Group Spotlights Talented Community Members

At NVIDIA GTC, the Omniverse User Group held its second meeting, focusing on developers and users of the NVIDIA open platform for collaboration and simulation.
A dirty '80s-style teenager's room with video games strewn about. Graphic created using Omniverse.

Capping off a week of major announcements at NVIDIA GTC last week, including NVIDIA Omniverse Avatar and the Earth-2 supercomputer, the community team hosted the second Omniverse User Group.

Excited participants logged in from across the globe to hear about the future of the platform from the NVIDIA Omniverse leadership team. Participants also got a sneak peek of upcoming features and releases through presentations from partners and community members showcasing their inspiring work. 

The event culminated in the announcement of the latest contest winners, along with the first Ambassador and Omniverse Machinima expert, Pekka Varis from Catchline. Varis earned the ambassador title by helping others on the forums and Discord server and sharing his deep knowledge of the platform.

Afterward, the party migrated to the official Discord server, where the community had a blast chatting, answering questions, and learning about what excited users the most about the future of the Omniverse. 

Highlights include:

Watch the second NVIDIA Omniverse User Group

A tiled graphic of 6 headshots of user group members at the meeting.
Figure 1. NVIDIA Omniverse User Group members.

Share your work

As livestream cohost and Omniverse Community Manager Wendy Gram often says, “the community’s amazing work in the Omniverse inspires us every single day.”

If you are interested in presenting to the community at a User Group meeting, in a post, or on our weekly livestream, reach out through Discord (Prof E#2041) or by email.

We also invite you to share your work: tag us on social media using #NVIDIAOmniverse, or submit to the Omniverse Gallery.

We look forward to seeing you in the Omniverse or at our next events. Please follow us for the latest updates.

Connect with us:




Categories
Misc

A GFN Thursday Deal: Get ‘Crysis Remastered’ Free With Any Six-Month GeForce NOW Membership

You’ve reached your weekly gaming checkpoint. Welcome to a positively packed GFN Thursday. This week delivers a sweet deal for gamers ready to upgrade their PC gaming from the cloud: with any new, paid six-month Priority or GeForce NOW RTX 3080 subscription, members will receive Crysis Remastered free for a limited time.

The post A GFN Thursday Deal: Get ‘Crysis Remastered’ Free With Any Six-Month GeForce NOW Membership appeared first on The Official NVIDIA Blog.

Categories
Misc

Keras for R is back!

For a while, it may have seemed that Keras for R was in some undecidable state, like Schrödinger’s cat before inspection. It is high time to correct that impression. Keras for R is back, with two recent releases adding powerful capabilities that considerably lighten previously tedious tasks. This post provides a high-level overview. Future posts will go into more detail on some of the most helpful new features, as well as dive into the powerful low-level enhancements that make the former possible.

Categories
Misc

Get 50% Off Upcoming Hands-On Training from NVIDIA

Register now for instructor-led workshops from the NVIDIA Deep Learning Institute.

Get hands-on training in AI, deep learning, accelerated computing, and data science with the NVIDIA Deep Learning Institute (DLI). DLI offers self-paced, online courses as well as instructor-led online workshops. Whether you are a developer, data scientist, professor, or student, there is a course for you within DLI. Learners who complete the courses and workshops also can earn an NVIDIA DLI certificate to demonstrate subject-matter competency and support career growth. 

Full-day workshops offer a comprehensive learning experience that includes hands-on exercises and guidance from expert instructors certified by DLI. 

Receive half-off registration for the following workshops: 

Fundamentals of Accelerated Computing with CUDA C/C++

Learn how to accelerate and optimize existing C/C++ CPU-only applications to leverage the power of GPUs using the most essential CUDA techniques and the Nsight™ Systems profiler.

  • Thursday, Nov. 18, 7:00 a.m. – 3:00 p.m. PST
  • Use code DLISC21 to receive half-off full-price registration.

Building Transformer-Based Natural Language Processing Applications

Learn how to use transformer-based natural language processing models for text classification tasks, such as categorizing documents. You’ll also get insight into how to use transformer-based models for named-entity recognition (NER) tasks and more.

  • Monday, Dec. 6, 9:00 a.m. – 5:00 p.m. CET
  • Use code DLIGTC50 to receive half-off full-price registration
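
For readers who want a feel for these tasks before the workshop, here is a minimal, illustrative sketch using the open-source Hugging Face transformers library and its default pretrained models. It is only an illustration and is not drawn from the workshop’s materials.

```python
# Illustration only (not the workshop's materials): transformer-based text
# classification and named-entity recognition with Hugging Face pipelines.
# Requires: pip install transformers torch
from transformers import pipeline

# Categorize a document with a default pretrained classification model
classifier = pipeline("text-classification")
print(classifier("The delivery was delayed by two weeks and support never replied."))

# Extract named entities (people, organizations, locations) from a sentence
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("NVIDIA announced the Earth-2 supercomputer at GTC in Santa Clara."))
```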

Applications of AI for Predictive Maintenance

Learn how to identify anomalies and failures in time-series data, estimate the remaining useful life of the corresponding parts, and use this information to map anomalies to failure conditions.

  • Tuesday, Dec. 14, 9:00 a.m. – 5:00 p.m. CET
  • Use code DLIGTC50 to receive half-off full-price registration
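
As a toy illustration of the anomaly-detection part of this topic (not the workshop’s materials), one simple baseline is to flag time-series readings whose rolling z-score exceeds a threshold; the sensor signal below is synthetic.

```python
# Toy baseline: flag sensor readings whose rolling z-score exceeds a threshold.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
sensor = pd.Series(rng.normal(50.0, 1.0, 1000))   # synthetic, healthy signal
sensor.iloc[700:705] += 8.0                        # injected failure-like spike

rolling_mean = sensor.rolling(window=50, min_periods=50).mean()
rolling_std = sensor.rolling(window=50, min_periods=50).std()
z_score = (sensor - rolling_mean) / rolling_std

anomalies = sensor[z_score.abs() > 4.0]
print(f"Flagged {len(anomalies)} anomalous readings at indices {list(anomalies.index)}")
```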

Take advantage of the discount codes. Space is limited, so register now.

Visit the DLI website for details on each course and the full schedule of upcoming instructor-led workshops, which is regularly updated with new training opportunities. Also, check out our catalog of self-paced online courses.

Categories
Offsites

Predicting Text Readability from Scrolling Interactions

Illiteracy affects at least 773 million people globally, both young and old. For these individuals, reading information from unfamiliar sources or on unfamiliar topics can be extremely difficult. Unfortunately, these inequalities have been further magnified by the global pandemic as a result of unequal access to education in reading and writing. In fact, UNESCO reports that over 100 million children are falling behind the minimum proficiency level in reading due to COVID-related school closures.

With increasing world-wide access to technology, reading on a device, such as a tablet or phone, has largely taken the place of traditional formats. This provides a unique opportunity to observe reading interactions, e.g., how a reader scrolls through a text, which can inform our understanding of what can make text difficult to read. This understanding is crucial when designing educational applications for low-proficiency readers and language learners, because it can be used to match learners with appropriately leveled texts as well as to support readers in understanding texts beyond their reading level.

In “Predicting Text Readability from Scrolling Interactions”, presented at CoNLL 2021, we show that data from on-device reading interactions can be used to predict how readable a text is. This novel approach provides insights into subjective readability — whether an individual reader has found a text accessible — and demonstrates that existing readability models can be improved by including feedback from scroll-based reading interactions. In order to encourage research in this area and to help enable more personalized tools for language learning and text simplification, we are releasing the dataset of reading interactions generated from our scrolling behavior–based readability assessment of English-language texts.

Understanding Text Difficulty
There are multiple aspects of a text that impact how difficult it is to read, including the vocabulary level, the syntactic structure, and overall coherence. Traditional machine learning approaches to measure readability have exclusively relied on such linguistic features. However, using these features alone does not work well for online content, because such content often contains abbreviations, emojis, broken text, and short passages, which detrimentally impact the performance of readability models.

To address this, we investigated whether aggregate data about the reading interactions of a group can be used to predict how difficult a text is, as well as how reading interactions may differ based on a reader’s understanding. When reading on a device, readers typically interact with text by scrolling in a vertical fashion, which we hypothesize can be used as a coarse proxy for reading comprehension. With this in mind, we recruited 518 paid participants and asked them to read English-language texts of different difficulty levels. We recorded the reading interactions by measuring different features of the participants’ scrolling behavior, such as the speed, acceleration, and number of times areas of text were revisited. We then used this information to produce a set of features for a readability classifier.
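
As a rough sketch of how such scrolling features might be computed (the schema and exact definitions here are assumptions, not the paper’s implementation), one could derive speed, acceleration, and revisit counts from raw scroll events as follows:

```python
# Rough sketch (assumed schema, not the paper's implementation): derive scrolling
# features from raw scroll events with columns "time_ms" and "scroll_y" (pixels).
import numpy as np
import pandas as pd

def scroll_features(events: pd.DataFrame) -> dict:
    events = events.sort_values("time_ms")
    t = events["time_ms"].to_numpy() / 1000.0   # event times in seconds
    y = events["scroll_y"].to_numpy()           # vertical scroll position in pixels

    dt = np.diff(t)
    dy = np.diff(y)
    speed = dy / dt                             # signed scroll speed, px/s
    accel = np.diff(speed) / dt[1:]             # scroll acceleration, px/s^2

    return {
        "total_read_time_s": t[-1] - t[0],
        "max_speed": np.abs(speed).max(),
        "mean_accel": accel.mean(),
        "max_accel": np.abs(accel).max(),
        "min_accel": np.abs(accel).min(),
        "num_revisits": int((dy < 0).sum()),    # times the reader scrolled back up
    }

# Tiny synthetic example
demo = pd.DataFrame({"time_ms": [0, 500, 1000, 1800, 2600],
                     "scroll_y": [0, 120, 260, 200, 420]})
print(scroll_features(demo))
```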

Predicting Text Difficulty from Scrolling Behavior
We investigated which types of scrolling behaviors were most impacted by text difficulty and tested the significance using linear mixed-effects models. In our setup, we have repeated measures, as multiple participants read the same texts and each participant reads more than one text. Using linear mixed-effects models gives us higher confidence that the differences in interactions we are observing are because of the text difficulty, and not other random effects.
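
A minimal sketch of this kind of test, assuming a table with one row per participant-text pair (the column names are invented for illustration, and statsmodels stands in for whatever tooling the authors used):

```python
# Minimal sketch (assumed column names, not the paper's code): test whether a
# scrolling feature differs by text difficulty, with a random intercept per
# participant to account for repeated measures.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per participant-text pair
df = pd.read_csv("scroll_features.csv")  # columns: participant_id, text_level, max_speed

model = smf.mixedlm("max_speed ~ text_level", data=df, groups=df["participant_id"])
result = model.fit()
print(result.summary())  # the text_level coefficient indicates the effect of difficulty
```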

Our results showed that multiple reading behaviors differed significantly based on the text level, for example, the average, maximum and minimum acceleration of scrolling. We found the most significant features to be the total read time and the maximum reading speeds.

We then used these features as inputs to a machine learning algorithm. We designed and trained a support vector machine (i.e., a binary classifier) to predict whether a text is either advanced or elementary based only on scrolling behaviors as individuals interacted with it. The dataset on which the model was trained contains 60 articles, each of which was read by an average of 17 participants. From these interactions we produced aggregate features by taking the mean of the significant measures across participants.

 

We measured the accuracy of the approach using a metric called f-score, which measures how accurate the model is at classifying a text as either “easy” or “difficult” (where 1.0 reflects perfect classification accuracy). We are able to achieve an f-score of 0.77 on this task, using interaction features alone. This is the first work to show that it is possible to predict the readability of a text using only interaction features.
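
A hedged sketch of the modeling setup described above, using scikit-learn on an assumed per-article feature table (this is not the paper’s code, and the feature names are illustrative):

```python
# Hedged sketch (not the paper's code): binary SVM over per-article aggregate
# interaction features, evaluated with the F1 score via cross-validation.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical table: one row per article, mean interaction features across readers,
# and a binary label (0 = elementary, 1 = advanced).
df = pd.read_csv("article_features.csv")
X = df[["total_read_time_s", "max_speed", "max_accel", "num_revisits"]]
y = df["is_advanced"]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"Mean F1 across folds: {scores.mean():.2f}")
```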

Improving Readability Models
In order to demonstrate the value of applying readability measures from scrolling behaviors to existing readability models, we integrated scroll-based features into the state-of-the-art automated readability assessment tool, which was released as part of the OneStopEnglish corpus. We found that the addition of interaction features improves the f-score of this model from 0.84 to 0.88. In addition, we were able to significantly outperform this system by using interaction information with simple vocabulary features, such as the number of words in the text, achieving an impressive f-score of 0.96.

In our study, we recorded comprehension scores to evaluate the understanding and readability of text for individuals. Participants were asked three questions per article to assess their understanding of what they had read. The interaction features of an individual’s scrolling behavior were represented as a high-dimensional vector. To explore this data, we visualized the reading interaction features for each participant using t-distributed stochastic neighbor embedding (t-SNE), a statistical method for visualizing high-dimensional data. The results revealed clusters in the comprehension score based on how well individuals understood the text. This shows that there is implicit information in reading interactions about the likelihood that an individual has understood a given text. We refer to this phenomenon as subjective readability. This information can be very useful for educational applications or for simplifying online content.

Plot showing t-SNE projection of scroll interactions in 2-dimensions. The color of each data point corresponds to the comprehension score. Clusters of comprehension scores indicate that there are correlations between reading behaviors and comprehension.
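
A similar projection can be produced with off-the-shelf tools; the sketch below uses scikit-learn’s t-SNE on placeholder data, since the released dataset’s exact schema is not reproduced here.

```python
# Illustrative only: a 2-D t-SNE projection of interaction vectors, colored by
# comprehension score, on placeholder data (the real dataset's schema may differ).
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(518, 20))          # placeholder interaction vectors
comprehension = rng.integers(0, 4, size=518)   # placeholder scores (0-3 correct answers)

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(embedding[:, 0], embedding[:, 1], c=comprehension, cmap="viridis", s=10)
plt.colorbar(label="comprehension score")
plt.title("t-SNE of reading-interaction features")
plt.show()
```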

Finally, we investigated the extent to which reading interactions vary across audiences. We compared the average scrolling speed across different reader groups, covering reading proficiency and the reader’s first language. We found that the speed distribution varies depending on the proficiency and first language of the audience. This supports the case that first language and proficiency alter the reading behaviors of audiences, which allows us to contextualize the reading behavior of groups and better understand which areas of text may be harder for them to read.

Histogram showing the average speeds of scrolling (in vertical pixels per millisecond) across readers of different proficiency levels (beginner, intermediate and advanced), with lines showing the smoothed trend for each group. A higher average scroll speed indicates faster reading times. For example, a more challenging text that corresponds to slower scroll speeds by advanced readers is associated with higher scroll speeds by beginners because they engage with the text only superficially.

Histogram showing the average speeds of scrolling (in vertical pixels per millisecond) across audiences by first language of the readers, Tamil or English, with lines showing the smoothed trend for each group. A higher average scroll speed indicates faster reading times. Dark blue bars are where the histograms overlap.

Conclusion
This work is the first to show that reading interactions, such as scrolling behavior, can be used to predict the readability of text, which can yield numerous benefits. Such measures are language-agnostic, unobtrusive, and robust to noisy text. Implicit user feedback allows insight into readability at an individual level, thereby allowing for a more inclusive and personalizable assessment of text difficulty. Furthermore, being able to judge the subjective readability of text benefits language learning and educational apps. We conducted a 518-participant study to investigate the impact of text readability on reading interactions and are releasing a novel dataset of the associated reading interactions. We confirm that there are statistically significant differences in the way that readers interact with advanced and elementary texts, and that the comprehension scores of individuals correlate with specific measures of scrolling interaction. For more information, our conference presentation is available to view.

Acknowledgements
We thank our collaborators Yevgeni Berzak, Tony Mak and Matt Sharifi, as well as Dmitry Lagun and Blaise Aguera y Arcas for their helpful feedback on the paper.

Categories
Misc

NVIDIA Announces Financial Results for Third Quarter Fiscal 2022

NVIDIA today reported record revenue for the third quarter ended October 31, 2021, of $7.10 billion, up 50 percent from a year earlier and up 9 percent from the previous quarter, with record revenue from the company’s Gaming, Data Center and Professional Visualization market platforms.

Categories
Misc

An Important Skill for Data Scientists and Machine Learning Practitioners

The most important soft skill for ML practitioners and Data Scientists

Editor’s Note: If you’re interested in sharing your data science and AI expertise, you can apply to write for our blog here.

Data Science as a discipline and profession demands that its practitioners possess various skills, ranging from soft skills, such as communication and leadership, to hard skills, such as deductive reasoning, algorithmic thinking, and programming. But there’s one crucial skill that Data Scientists should attain irrespective of their experience, and that is writing.

Even Data Scientists working in technical fields such as quantum computing or healthcare research need to write. It takes time to develop strong writing ability, and there are challenges that Data Scientists confront that might prevent them from expressing their thoughts easily. That’s why this article contains a variety of writing strategies and explanations of how they benefit Data Science and Machine Learning professionals.

1. Short-form writing

Let’s start with the most accessible styles of writing we encounter. Short-form writing is typically low effort and doesn’t take up too much time. Machine learning and data science content written on Twitter, LinkedIn, Facebook, Quora, and StackOverflow all falls into this category.

Image with a laptop and mobile phone
Figure 1: Photo by Austin Distel on Unsplash

Long-form content, such as books, articles, and essays, is usually the most valuable material in the ML field. All of it requires time to write, read, and analyze. Short-form content on social media platforms, on the other hand, can provide information while using far less effort and time than long-form content.

Currently, we have the privilege of witnessing discourse and ideas shared between AI pioneers and reputable machine learning practitioners, without having to wait for them to write and publish a research paper or an essay. Writing short-form posts on social media platforms provides insight into opinions and views that are not easily expressed verbally, and it lets you participate and share your own opinions.

For those who want to experiment with connecting with other ML experts through social media postings, I recommend following some people who post genuine and relevant information about Machine learning and Data Science. Take some time to read the tone of the discussions and contributions on posts, and if you have anything valuable to contribute, speak up.

To get you started, here is a list of individuals who post AI-related content (among other interesting things): Andrew Ng, Geoffrey Hinton, Allie K. Miller, Andrej Karpathy, Jeremy Howard, Francois Chollet, Aurélien Geron, and Lex Fridman. There are plenty more individuals to follow, but content from these individuals should keep you busy for a while.

Question-and-answer platforms

Question-and-answer writing has the lowest entry barrier and does not consume much time, depending on your ability to answer the questions posed.

Given your profession, I’m sure you’ve heard of StackOverflow, the internet’s most popular resource for engineers. When it comes to asking questions on StackOverflow, things aren’t as simple; clarity and transparency are required. Writing queries properly is such an important component of StackOverflow that they’ve published a comprehensive guide on the subject.

Here’s the key takeaway in this section: asking and answering questions on StackOverflow helps you become concise and clear when posing queries, as well as thorough when responding.

2. Emails and Messages

Image of laptop and mobile phone
Figure 2: Photo by Maxim Ilyahov on Unsplash

Writing emails and messages is nothing specific to machine learning, but Data Scientists and Machine Learning practitioners who practice the art of composing effective messages tend to flourish within corporations and teams, for obvious reasons: the ability to contribute, network, and get things done.

Composing well-written messages and emails can land you a new role, get your project funded, or get you into an academic institution. Purvanshi Mehta wrote an article that explores effective methods of cold messaging individuals on LinkedIn to build networks. Purvanshi’s article is a step-by-step guide to adoptable cold-messaging etiquette.

3. Blogs and Articles

Many experts believe that blogs and articles have a unique role in the machine learning community. Articles are how professionals stay up to date on software releases, learn new methods, and communicate ideas.

Technical and non-technical ML articles are the two most frequent sorts of articles you’ll encounter. Technical articles are composed of descriptive text coupled with code snippets or gists that describe the implementation of particular features. Non-technical articles include more descriptive language and pictures to illustrate ideas and concepts.

4. Newsletters

A developer sitting at a table, working.
Figure 3: Photo by cottonbro from Pexels

Starting and maintaining a newsletter might not be for every Data Scientist, but this sort of writing has been shown to provide professional and financial advantages to those who are willing to put in the effort.

A newsletter is a key strategic play for DS/ML professionals to increase awareness and presence in the AI sector. A newsletter’s writing style is not prescribed, so you may write it however you choose. You might start a formal, lengthy, and serious newsletter or a short, informative, and funny one.

The lesson to be drawn from this is that creating a newsletter may help you develop a personal brand in your field, business, or organization. Those who like what you do will continue to consume and promote your material.

There are a thousand reasons not to start a newsletter today, but to spark some inspiration, below are some ideas you can base your newsletter on, and I’ve also included some AI newsletters you should subscribe to.

Newsletter Ideas related to AI:

  • A collection of AI/ML videos to watch, with your input on each video.
  • A collection of AI/ML articles to read.
  • Job postings in your area that job seekers might be interested in.
  • Up-to-date relevant AI news for ML practitioners interested in the more practical application of AI.

Remember that the frequency, length, and content of your newsletter are all defined by you. You could start a monthly newsletter if you feel you don’t have much time or a daily newsletter to churn out content like a machine.

Machine Learning and Data Science newsletters to subscribe to:

5. Documentation

Developer coding, with code displayed on a monitor.
Figure 4: Photo by Sigmund on Unsplash.

Documentation, both technical and non-technical, is a common activity among software engineering occupations. Data Scientists are not exempt from the norm, and documentation that explains software code or individual features is recommended and considered best practice.

When is a project successful? Some might consider it to be when your model achieves an acceptable accuracy on a test dataset.

Experienced Data Scientists understand that project success is influenced by a number of variables, including software maintainability, longevity, and knowledge transfer. Software documentation is a task that can improve the prospects of a project beyond the capabilities of a single team member, not to mention that it provides an extra layer of software quality and maintainability.

One of the main advantages of documentation that Data Scientists should be aware of is its role in reducing queries about source code from new project members or novice Data Analysts. The majority of questions about source code concern file locations, coding standards, and best practices. This information can all be recorded once and referenced by many individuals.

Here are some ideas of items you could document; a short docstring sketch follows the list:

  • Code Documentation: It’s critical to standardize implementation style and format in order to guarantee uniformity across applications. This conformity makes the transition for new developers into a codebase easier since coding standards are given through code documentation.
  • Research and Analysis: Given the importance of software product features, successful development is always dependent on thorough study and analysis. Any ML expert who has worked on a project from the start will have handled a plethora of feature requests from stakeholders. Documenting information surrounding feature requests enables other parties involved in the project to get a more straightforward overview of the requirement and usefulness of the proposed feature. It also encourages the feature requester to conduct better research and analysis.
  • Database Configurations / Application Information: Documenting information particular to applications, such as configuration parameters and environment variables, is critical for any software team, especially if you move to a new job or company.
  • How-tos: Installing software libraries and packages may be difficult, and there can be different installation processes for different operating systems or even versions. It’s not uncommon to discover missing dependencies in official library documentation, or quirks you must work through to install the program.
  • API Documentation: When teams develop internal and external APIs (Application Programming Interfaces), they should document the components of methods, functions, and data resources needed by those APIs. There’s nothing more annoying than working with a non-documented API; the whole process becomes a guessing game, and you’ll spend time researching the parameters, internal workings, and outputs of an undocumented API. Save your team and clients time by creating a smooth experience when consuming the technical resources you make.
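
As a small, hypothetical example of the code-documentation item above, a helper with a descriptive docstring records intent, parameters, and return values so new team members don’t have to ask. The function, its columns, and its defaults are invented for illustration.

```python
# Hypothetical example of code documentation: the function, columns, and defaults
# are invented for illustration.
import pandas as pd

def drop_stale_readings(df: pd.DataFrame, max_age_days: int = 30) -> pd.DataFrame:
    """Remove sensor readings older than ``max_age_days``.

    Args:
        df: Raw readings with a timezone-aware ``timestamp`` column.
        max_age_days: Readings older than this many days are discarded.

    Returns:
        A copy of ``df`` containing only recent readings, sorted by timestamp.
    """
    cutoff = pd.Timestamp.now(tz="UTC") - pd.Timedelta(days=max_age_days)
    recent = df[df["timestamp"] >= cutoff]
    return recent.sort_values("timestamp").reset_index(drop=True)
```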

There’s no question that extensive resources allow organizations to produce many types of documentation, and some even hire technical writers. Although those are all viable options, it is critical for machine learning experts who wish to take software completeness seriously to practice documenting the programs and software they develop, to demonstrate that they can provide thorough explanations.

A quick Google search on “how to write good software documentation” provided good resources that all shared the same messages and best practices on documentation.

6. Research Papers

Student studying in a library.
Figure 5: Photo by Ron Lach from Pexels.

In 2020, I published an article on how to read research papers, which became a huge hit. When it comes to utilizing ML algorithms and models, we have to optimize the way we read these papers in much the same way that seasoned machine-learning experts do.

Writing machine-learning research papers is the other side of the coin. I’ve never written a research paper, and I don’t intend to start now. However, some Machine-learning specialties are very concerned with writing and publishing research studies. As a metric of career success, research institutions and firms use the number of papers published by an individual or group.

There’s an art to writing research papers; researchers and scientists must think about the structure and content of the data to ensure that a message, breakthrough, or idea is delivered effectively. Most of us are probably not writing research papers anytime soon, but there’s value in adopting the practice of writing good research papers. For example, having an abstract, introduction, and conclusion is a writing structure transferable to other writing pieces.

Go ahead and read some research papers; take note of the language, structure, and use of visuals. Try to adopt any good practices you identify in your next written piece.

7. Books and E-books

A shelf of books.
Figure 6: Photo by Nick Fewings on Unsplash.

There’s no doubt that ML/DS books are the most authoritative texts on machine learning theory and hands-on expertise. I’m not suggesting that all data scientists and ML engineers should write a book. But bear with me.

I looked through several of the authors on my shelf who wrote books in AI/ML, and they all have extensive experience in their fields.

Writing non-fiction, technical books about machine learning is very difficult. It requires a high level of theoretical and practical industry knowledge that can only be attained through total immersion in study, research, and implementation. To educate hundreds of ML Engineers and Data Scientists, your reputation must be based on solid academic, commercial, or research credentials. Not to mention that writers require creativity when delivering well-written books. More specifically, they have to master the art of conveying sophisticated topics in books.

My argument is that to create a timeless machine learning book, you must go down the road of expertise. That may not sound inviting, but consider that setting a long-term objective of writing a book will push you to delve deeper into machine intelligence or your chosen field, which will enhance your general understanding of AI.

Books for Data Scientists and Machine Learning practitioners:

You will find that most of the authors listed above have produced most, if not all, of the forms of writing covered in this article, regardless of their domain specialty, which is why I consider writing a vital skill for Machine Learning practitioners and Data Scientists to master.

Conclusion

Whenever I’m asked what life decision provided me with the most benefit, whether financial, academic, or career-related, I usually answer with my decision to write.

Throughout this post, you’ve seen several advantages Data Scientists and Machine Learning experts may obtain if they write AI-related material on a regular basis. This section centralizes all the benefits listed throughout this article to make sure it all hits home.

  • ML professionals employ writing to communicate complicated subjects in a simple way. By reading a well-written blog post by Andrej Karpathy, I was able to acquire a greater appreciation for the practical application of convolutional neural networks.
  • Various types of writing can help you improve your creativity and critical thinking. I recently read AI 2041 by Kai-Fu Lee and Chen Qiufan, in which the authors examine AI technologies and their effects on human lives through well-written fictional stories and thorough explanations of AI technologies. Both writers have written for many years and have authored other books. It’s reasonable to conclude that their writing abilities allowed the writers to express future situations involving AI technology and explore the unknown societal impact of AI integration through critical and logical predictions based on current AI development.
  • Writing in the form of storytelling gives life to projects. Good stories are spoken, but great stories are written. The retelling of machine-learning projects to stakeholders such as customers, investors, or project managers takes a positive and exciting turn when coupled with the art of storytelling. A Data Scientist explaining to stakeholders why a new state-of-the-art cancer detection deep-learning model should be leveraged across federal hospitals becomes more impactful and relatable when coupled with the story of an early diagnosis of a patient.
  • Within the machine learning community, writing is a successful method of knowledge transfer. Most of the information you’ll get in the DS/ML world will be through written content. Articles, essays, and research papers are all repositories of years’ worth of knowledge organized into succinct chapters with clear explanations and digestible formats. Writing is an efficient way to condense years of knowledge and experience.

Did you know that the AI pioneers and experts we admire and learn from also publish regularly? In this article, I compile a shortlist of individuals in the AI field and provide samples of their work, emphasizing its value and consequence.

Thanks for reading.

Categories
Misc

AI Pioneers Write, So Should Data Scientists

Data Scientists’ role in producing AI-related written content for the public

Editor’s Note: If you’re interested in sharing your data science and AI expertise, you can apply to write for our blog here.

Primarily, the dual purpose of writing has been to preserve and transfer knowledge across communities, organizations, and so on. Writing within the machine-learning domain serves those same purposes. There are prominent individuals who have placed immense time and effort into advancing the frontier of machine learning and AI as a field. Coincidentally, a good number of these AI pioneers and experts write a lot.

This article highlights individuals who have contributed to the wider field of AI in different shapes and forms, emphasizing the contribution each has made through the practice of writing AI-related content.

The essential takeaway from this article is that, as Data Scientists, we need to develop soft skills such as creative and critical thinking, alongside communication. Writing is an activity that cultivates these critical soft skills for Data Scientists.

AI Experts That Write

Andrej Karpathy

At the time of writing, Andrej Karpathy is Senior Director of AI at Tesla, overseeing engineering and research efforts to bring commercial autonomous vehicles to market using massive artificial neural networks trained on millions of image and video samples.

Andrej is a prominent writer. His work has been featured in top publications such as Forbes, MIT Technology Review, Fast Company, and Business Insider. Specifically, I’ve been following Andrej’s writing through his Medium profile and his blog.

In my time as a Computer Vision student exploring the fundamentals of convolutional neural networks, Andrej’s deep learning course at Stanford proved instrumental in gaining an understanding and intuition of the internal structure of a convolutional neural network. Specifically, the written content of the course explored details such as the distribution of parameters across the CNN, the operations of the different layers within the CNN architecture, and the convolution operation that occurs between a CNN’s filter parameters and the values of an input image. Andrej uses his writing to present new ideas, explore the state of deep learning, and educate others.

Data Scientists are intermediaries between the world of numerical representations of data and project stakeholders; therefore, the ability to interpret and convey understanding derived from datasets is essential to Data Scientists. Writing is one means of communication that equips Data Scientists with the capability to convey and present ideas, patterns, and learnings from data. Andrej’s writing is a clear example of how this is done. He provides clear and concisely written explanations of neural network architectures, data preparation processes, and much more.

Kai-Fu Lee

Kai-Fu Lee is an AI and Data Science Expert. He has contributed significantly to AI through his work at Google, Microsoft, Apple, and other organizations.

He’s currently CEO of Sinovation Ventures. Kai-Fu has made significant contributions to AI research by applying artificial intelligence in video analysis, computer vision, pattern recognition, and so on. Furthermore, Kai-Fu Lee has written books exploring the global players of AI and the future utilization and impact of AI, namely AI Superpowers and AI 2041.

Through his writing, Kai-Fu Lee dissects the strategies of nations and entities that operate abundantly within the AI domain. The communication of decisions, mindset, and national efforts that drive the AI superpowers of today is crucial to the developing nations seeking to fast-track the development of AI technologies.

However, Kai-Fu Lee also conveys, through his writing, the potential disadvantages that the advancement of AI technologies can have on societies and individuals. By reading Kai-Fu Lee’s written content, I’ve been able to understand how deep learning and predictive models can affect daily human lives when their usability is projected into imaginative future scenarios that touch on societal issues such as bias, poverty, discrimination, inequality, and so on.

The “dangers of AI” is a discourse that’s held more frequently as AI technology and data-fueled algorithms become commonplace within our mobile devices, appliances, and processes. Data Scientists are ushering in the future one model at a time, and it’s our responsibility to ensure that we communicate the fact that we conduct an in-depth cost-benefit analysis of technologies before they are integrated into society. These considerations put consumers’ minds at ease by ensuring that the positive and negative impacts of AI technology are not just afterthoughts to Data Scientists.

An effective method of communicating the previously mentioned considerations is through writing. There’s value in writing a post or two explaining the data sources, network architectures, algorithms, and extrapolated future utilization of AI applications or predictive models based on current utilization. A Data Scientist who covers these steps as part of their process establishes a sense of accountability and trust with product consumers and, more broadly, the community.

Francois Chollet

TensorFlow and Keras are two primary libraries used extensively in data science and machine-learning projects. If you use either of these libraries, then Francois Chollet is probably an individual within AI you’ve come across.

Francois Chollet is an AI researcher who currently works as a Software Engineer at Google. He’s recognized as the creator of the deep-learning library Keras and a prominent contributor to the TensorFlow library. And, no surprise here, he writes.

Through his writing, Francois has expressed his thoughts on concerns, predictions, and limitations of AI. The impact of Francois’s writing on me as a machine-learning practitioner comes from his essays on software engineering topics, more specifically API design and software development processes. Through his well-known book Deep Learning With Python, Francois has educated hundreds of thousands of readers on practical deep learning and the use of the Python programming language for machine-learning tasks.

Through writing, Data Scientists have the opportunity to promote best practices in software development and data science processes among team members and organizations.

Conclusion

Academic institutions covering Data Science should include writing in the course curriculum. Cultivating writing as a habit through the years in academia proves beneficial in professional roles.

Professional Data Scientists should expand their craft by adopting writing as an integral aspect of communication of ideas, techniques, and concepts. As pointed out through the work of the AI experts mentioned in this article, written work produced can be in the form of essays, blogs, articles, and so on. Even interacting with peers and engaging in discourse on platforms such as LinkedIn or Twitter can be beneficial for Data Science professionals.

Novice Data Scientists often ask what methods can be adopted to improve skills, knowledge, and confidence; unsurprisingly, the answer to that is also writing. Writing enables the expression of ideas in a structured manner that is difficult to convey through other communicative methods. Writing also serves as a method of reinforcing learning.

This post is a fantastic source of inspiration for Data Scientists looking for ideas, and if you’re feeling inspired, read this article about the different sorts of writing in the field of machine learning.

Categories
Misc

How to use tensorflow with an AMD GPU

https://www.youtube.com/watch?v=Np11T5-_KhA
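
The video itself isn’t summarized here, but as a rough sketch: one common route to TensorFlow on an AMD GPU under Linux is the ROCm build (for example, pip install tensorflow-rocm on a machine with ROCm drivers installed). A quick sanity check that the GPU is visible might look like this:

```python
# Rough sketch (not from the linked video): after installing a ROCm build of
# TensorFlow on a machine with ROCm drivers, confirm the AMD GPU is visible
# and run a small op on it.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

if gpus:
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        print("Matmul checksum:", float(tf.reduce_sum(tf.matmul(a, b))))
```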

submitted by /u/limapedro
[visit reddit] [comments]