Engineer.ai, an Indian start-up claiming to have built an AI-assisted app development platform, does not actually use AI to build apps, according to a report from The Wall Street Journal. Instead, the company, which has raised nearly $30 million in funding from a SoftBank-owned firm and others, allegedly relies primarily on human engineers, using hype around AI to attract customers and investment until it can actually get its automation platform off the ground.
The company claims its human-assisted AI lets a customer build more than 80 percent of a mobile app in about an hour, according to founder Sachin Dev Duggal, who also goes by the title "chief wizard" and made the claim onstage last year. However, the WSJ reports that Engineer.ai does not use AI to assemble the code, instead relying on human engineers in India and elsewhere to build the apps.
The company was sued earlier this year by one of its own executives, Robert Holdheim, who claims that it is exaggerating its AI capabilities to get the funding it needs to actually work on the technology. According to Holdheim, Duggal told "investors that Engineer.ai had finished 80% of developing a product that it had hardly begun to develop."
“Engineer.ai mainly uses conventional software and human engineers to create apps”
When pressed on how it actually uses machine learning and other AI techniques, the company told the WSJ that it uses natural language processing to estimate the pricing and timelines of requested features, and relies on a "decision tree" to assign tasks to engineers. Neither really qualifies as the kind of modern AI that enables advanced machine translation or image recognition, and it appears that no AI agent or software of any kind is composing code. Engineer.ai did not immediately respond to a request for comment.
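For context, a "decision tree" in this sense can be little more than ordinary branching logic, a far cry from the deep learning behind machine translation or image recognition. A minimal, entirely hypothetical sketch of such rule-based task routing (every field name and team label here is invented for illustration):

```python
# Hypothetical illustration: hand-written "decision tree" routing.
# This is plain if/else logic, not trained machine learning; all
# feature names and team names are invented for the example.

def assign_task(task: dict) -> str:
    """Route an app-development task to a team using fixed rules."""
    if task.get("platform") == "ios":
        return "ios-team"
    if task.get("needs_backend"):
        return "backend-team"
    return "frontend-team"

print(assign_task({"platform": "ios"}))      # ios-team
print(assign_task({"needs_backend": True}))  # backend-team
print(assign_task({"platform": "android"}))  # frontend-team
```

Nothing in such code learns from data, which is why critics argue this kind of system does not qualify as AI in any meaningful sense.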
Engineer.ai is hardly the only company accused of overstating its AI capabilities. According to PitchBook, funding for AI startups is growing fast, reaching $31 billion last year, and companies like the Japanese conglomerate SoftBank have pledged to invest hundreds of billions of dollars in AI in the coming years. The number of companies registered under .ai, the top-level domain of the British territory of Anguilla, has doubled in recent years, the WSJ reports. In other words, saying your company builds otherwise standard technology, like an app development platform, with some AI thrown in is an easy way to attract funding and attention in a saturated startup landscape increasingly under pressure from the efforts of giants like Facebook, Google, Uber, and others.
According to the British investment firm MMC Ventures, startups with some kind of AI component can raise as much as 50 percent more money than other software companies, and the firm told the WSJ it suspects that 40 percent or more of those companies do not use real AI at all. Part of the problem is that AI is easy to get off the ground in a test or provisional format, but much harder to deploy at scale. In addition, obtaining the training data needed to build competent AI agents can be extremely expensive and time-consuming; companies like Facebook and Google have huge research organizations that pay engineers top salaries to develop better AI training techniques that may one day be used to build commercial products.
The revelations about Engineer.ai also expose an uncomfortable truth about much of modern AI: it barely exists. Like the content moderation efforts of large platforms such as Facebook and YouTube, which use some AI but rely primarily on armies of contractors, both overseas and domestic, to review harmful and violent content for removal, many AI technologies require humans working alongside them.

"Many startups use AI to build hype without actually using the technology"
The software needs to be trained to improve and corrected when things go wrong, and that requires human eyes and ears to review and annotate data and feed it back into the system, where engineers can use it to refine the algorithms. This was especially true of the short-lived chatbot craze a few years ago, when big names like Facebook and startups like Magic began hiring scores of contractors to sit behind AI agents, such as Facebook's since-discontinued M assistant, who would take over (or handle the conversation entirely) when things got too complicated.
But the mystification of AI, and its power to convince the public and even investors that a technology is more advanced than it really is, has since spread to entire companies and sectors.
Just look at the recent controversies over digital assistants and the human contractors hired to review the audio these assistants collect. Each of the Big Five has admitted to using human workers to review these audio clips to improve the assistants' performance over time. That includes Apple, which has paused the practice and plans to offer an opt-out option after realizing it could undermine its promise of user privacy. (Google has paused the practice in the EU for its Assistant but continues it in the US and elsewhere, as do Amazon for Alexa and Microsoft for Cortana and Skype.)
But the point stands: humans are required to help improve AI, even when companies are reluctant to admit it and are not always transparent with customers about when another human is actually involved in the process. In this case, a whole class of new startups appears to be using AI hype to try to build technologies they may not be capable of building, or never even intended to build, both because the real thing might be too difficult and because it is easy to pretend otherwise. And these companies are raising more money for it.