Artificial Intelligence is slowly making inroads into our lives, making decisions for us autonomously while we sit by, hoping it makes things simpler and easier. A number of people, including industry veterans like Elon Musk, have argued that strict legislation must be put in place before it's too late and things get out of control, yet most such pleas have fallen on deaf ears. A recent incident, however, might finally prompt authorities to take notice and draft formal legislation around AI. Hong Kong real estate tycoon Samathur Li Kin-kan is suing a company that offered to manage his investments and grow his wealth through AI-controlled automated trades. The Hong Kong-based businessman alleges that the system caused him losses worth millions of dollars.
According to a Bloomberg story on the case, Li was approached in 2017 by Raffaele Costa, CEO and founder of Tyndaris Investments, who pitched a company that uses a supercomputer called K1 and Artificial Intelligence to execute trades. The London-based Tyndaris Investments' robot hedge fund is powered by technology developed by Austria-based AI company 42.cx, which combs through online sources like real-time news and social media to predict U.S. stock futures. Li was sold on the idea and reportedly agreed to let the investment firm manage $2.5 billion, with the goal of eventually increasing that to $5 billion.

The company started managing Li's investments in late 2017, and Li says the AI-powered system was regularly losing money. On one particular day, the loss was to the tune of $20 million, which prompted Li to take legal action. Li pulled his money out of the account and filed a $23 million lawsuit against Tyndaris, saying Costa overstated K1's capabilities. Tyndaris, in turn, is suing Li for $3 million in unpaid fees and claims it never guaranteed the AI strategy would make money. This legal battle has become the first known court case over financial losses caused by an AI-powered trading system. The important question it raises is who should be held responsible when AI goes wrong; the outcome may set a precedent for such cases in the future.