
Powerful New Tools Meeting Challenge of AI Software Development, Requiring Smooth Linking of Multiple AI Models


Coders are busy these days as entire software infrastructures transition to the development and deployment of applications incorporating AI. Thankfully, there are powerful tools to help.

Google Cloud, for example, has recently added to its AI Hub, launched in April in response to concerns about redundant AI development efforts and the growing number of machine learning tools to manage. The new tools are aimed at promoting greater collaboration among data science and machine learning developers as they manage their pipelines and trained models, as described in an account in Enterprise AI.

Enhancements to the hub are said to allow greater sharing of trained ML models and pipelines from the Kubeflow workflow automation tool. Permissions can be better managed to run, for example, deep learning tasks on the Kubernetes cluster orchestrator. The hub includes models from NVIDIA and other AI developers.
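To make the idea of a shared pipeline concrete, here is a minimal sketch of the kind of Kubeflow pipeline that can be published and reused this way, written against the Kubeflow Pipelines (kfp) v1-style SDK; the pipeline name, container images, and arguments are illustrative placeholders, not anything from AI Hub itself.

```python
# Minimal sketch of a two-step Kubeflow pipeline (kfp v1-style SDK).
# The container images and arguments below are placeholders for illustration.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="train-and-evaluate",
    description="Illustrative two-step training pipeline.",
)
def train_and_evaluate_pipeline(learning_rate: float = 0.01):
    # Each step runs as a container on the Kubernetes cluster.
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/example-project/trainer:latest",    # placeholder image
        arguments=["--learning-rate", learning_rate],
        file_outputs={"model": "/output/model_path.txt"},
    )
    dsl.ContainerOp(
        name="evaluate",
        image="gcr.io/example-project/evaluator:latest",  # placeholder image
        arguments=["--model", train.outputs["model"]],
    )


if __name__ == "__main__":
    # Compile into a package that can be uploaded to Kubeflow Pipelines or shared.
    kfp.compiler.Compiler().compile(train_and_evaluate_pipeline, "pipeline.tar.gz")
```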

Google noted in a blog post, “Since releasing AI Hub, we’ve learned a lot about the challenges our first beta customers face bridging gaps and silos in ML projects. These new features are a direct result of these ongoing conversations and aim to make it easier to get started with any ML project by building on the great work of others.”

Acquired last year by Microsoft, GitHub offers a similar service called AI Lab.

One Second of Processing Broken Down

What happens during one second of processing in an AI application, as a series of linked AI modules is kicked off? Amedee Potier, CTO of Konverso, outlines the process in a recent account in Medium/The Startup.

Konverso is a startup in the chatbot market. Potier has over 25 years of experience working with AI, at Rocket Software and before that at the Thales research center in Paris. There he worked with, among others, Yann LeCun, the computer scientist known for his work on convolutional neural networks.

Discussing what AI is and is not these days, he points out a disconnect. “It is striking how most are still thinking of AI as one brain engine… AI is not about one brain, it is about numerous mini-brains, each focused on a single, very well-defined task,” he states.

He then describes what happens in one second of processing when someone calls in on the phone and engages with the company’s chatbot.

The user’s voice is sent to a Speech to Text engine, built on deep learning models with accuracy exceeding that of human transcribers. Nuance and some other suppliers provide the engine, which, interestingly, he says “is not yet in the reach of the open-source community.”
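As a rough illustration of this first step, the sketch below calls a hosted speech-to-text engine; it uses the Google Cloud Speech client purely as one accessible example, since the specifics of Nuance’s or Konverso’s own engine are not described in the article.

```python
# Sketch of the speech-to-text step using one hosted engine (Google Cloud Speech)
# as an example; the article names Nuance and other suppliers for this stage.
from google.cloud import speech


def transcribe(audio_bytes: bytes) -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # Keep the top transcription hypothesis for each recognized chunk of speech.
    return " ".join(r.alternatives[0].transcript for r in response.results)
```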

The bot may then access a translation service, if required. These are powered by deep neural network models, themselves trained on high-quality translated texts. Players in this market include Yandex.
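A minimal sketch of this optional translation step, using an openly published neural translation checkpoint (MarianMT via the transformers library) rather than any particular commercial service, might look like this; the model name is just one available French-to-English checkpoint.

```python
# Sketch of the optional translation step with a pretrained neural translation
# model; "Helsinki-NLP/opus-mt-fr-en" is one published French-to-English checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)


def translate(text: str) -> str:
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)


print(translate("Bonjour, comment puis-je vous aider ?"))
```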

The bot then extracts Named Entities (such as people and numeric values), identifies the Parts of Speech, runs syntactic analysis, then uses Machine Learning to identify an intent.
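A compact sketch of this language-understanding step follows, using spaCy for entities and parts of speech and a small scikit-learn classifier for the intent; the tiny training set and intent labels are invented for illustration and are not Konverso’s.

```python
# Sketch of the language-understanding step: named entities and parts of speech
# with spaCy, then a simple machine-learning intent classifier with scikit-learn.
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nlp = spacy.load("en_core_web_sm")
doc = nlp("Reset the password for John Smith on 3 accounts")

entities = [(ent.text, ent.label_) for ent in doc.ents]   # named entities
pos_tags = [(token.text, token.pos_) for token in doc]    # parts of speech

# Intent classification: map a sentence to one of a few known intents.
training_sentences = ["reset my password", "I forgot my password",
                      "create a new ticket", "open a support ticket"]
training_intents = ["password_reset", "password_reset",
                    "create_ticket", "create_ticket"]

intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(training_sentences, training_intents)
print(intent_model.predict(["please reset the password"]))  # -> password_reset
```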

Workflows, defined sets of procedures, will be associated with the intent using a range of AI tools and techniques. These include: Machine Learning classifiers; models for Text Similarity, to associate the sentence with others; various Recommendation Models that find similar answers; and Machine Reading Comprehension, described as “a field in progress,” which searches for relevant answers to questions in a large set of partially structured documents. Players include Watson Discovery, Microsoft, and new firms such as Recital, a natural language startup in Paris.
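One of these techniques, text similarity, is simple to sketch: vectorize a small knowledge base with TF-IDF and return the stored question closest to the user’s sentence. The knowledge-base entries below are invented for illustration.

```python
# Sketch of a text-similarity model that associates the user's sentence with
# the closest previously answered question in a small knowledge base.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "How do I reset my password?",
    "How do I request a new laptop?",
    "How do I connect to the VPN?",
]

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(knowledge_base)

query = "I can't remember my password"
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, kb_vectors)[0]
best_match = knowledge_base[scores.argmax()]
print(best_match)  # the closest known question, answered by its stored reply
```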

The orchestration of all these models makes the application appear to be smart. The availability of high quality learning data is often the number one challenge.
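Stripped to its skeleton, that orchestration is just a chain of small, single-purpose components invoked for every utterance. In the sketch below, each helper is a hypothetical stand-in for one of the components sketched above, not Konverso’s actual pipeline.

```python
# Sketch of the orchestration: each "mini-brain" is a small function and the bot
# chains them for every utterance. The helpers are hypothetical stand-ins for the
# speech-to-text, translation, intent, and answer-retrieval components above.
def transcribe(audio_bytes: bytes) -> str: ...        # speech to text
def translate(text: str) -> str: ...                  # optional translation
def classify_intent(text: str) -> str: ...            # ML intent classifier
def best_answer(intent: str, text: str) -> str: ...   # similarity / recommendation / MRC


def handle_utterance(audio_bytes: bytes) -> str:
    text = translate(transcribe(audio_bytes))
    intent = classify_intent(text)
    return best_answer(intent, text)
```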

Let’s Automate the Coding Too

Given all the work tying together so many models and AI methods, it figures automation is entering that picture too.

Deep TabNine is a startup offering a coding autocompleter, using AI to help automate the process of writing code. Programmers add it to their favorite editor, and as they write, it suggests how to continue each line, small chunks at a time, according to an account in The Verge.

The tool was created by Jacob Jackson as a computer science undergrad at the University of Waterloo. He started work on the original version in February 2018 and launched it last November. In July 2019, he launched an updated version built on GPT-2, a deep learning text-generation algorithm designed by the research lab OpenAI. This has impressed coders.

User Franck Nijhof, an IT manager, has used other autocompletion tools but finds Deep TabNine’s suggestions more accurate and helpful. “TabNine is undoubtedly a game-changer,” he is quoted as saying.

The software works on a predictive basis, said Jackson, relying on the ability of machine learning to find statistical patterns in the data. Deep TabNine is trained on two million files from the GitHub code repository.
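The predictive idea is easy to demonstrate with the generic pretrained GPT-2 available in the transformers library: feed in the code written so far and let the model propose the next few tokens. This is only a sketch of the principle, not Deep TabNine’s own model or training data.

```python
# Sketch of the predictive idea behind a code autocompleter: give a language
# model the code written so far and ask it to continue. Uses the generic
# pretrained GPT-2 from the transformers library, not Deep TabNine's model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "def fibonacci(n):\n    if n < 2:\n        return n\n    return "
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=12, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))  # prompt plus the model's suggested continuation
```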
