Ahead of our AI Enabled Tech Foresight Summit, we share some food for thought:
1) Is “Everything AI” reliable?
AI’s growing role in modern life is unquestionable: it is reshaping how we work, travel, and communicate. But AI systems can still take everyone by surprise, including their own creators. A recent example occurred at the opening event of Scientifica 2019, where an AI-trained drone did something completely unexpected. It produced a perfectly correct end result, but the way it performed the task was entirely unanticipated. Perhaps this was a more efficient way of completing the task, but it raises the question of how far we can rely on unpredictable AI behavior in more critical use cases, where how a task is carried out is as crucial as the end result. Read more here.
2) Is AI Ethical?
In recent years, AI has been supporting decision-making processes in government organizations and agencies. From police departments to courts, local authorities increasingly rely on judgments made by AI, and its use is especially concerning in sensitive areas such as criminal justice and welfare. The risk of producing an AI application that reinforces societal biases has prompted calls for greater transparency with regard to algorithmic and machine-learning decision processes. A recent case in the U.K. demonstrates the issue: when examinations were cancelled during the pandemic, an AI-based algorithm used to assess student grades significantly downgraded students at state schools in low-income areas, leaving many thousands of students seeking redress. This raises the question of how we can avoid feeding AI the societal biases and mistakes that we as humans make. Read more here.
3) How can we improve applicability, reliability and scalability of AI?
The two cases above, and many more like them, illustrate why the adoption and application of AI is still in its infancy in many public and private fields. To achieve reliable, ethical, and scalable AI applications, training AI on high-quality data is essential. Too often, the training of AI algorithms has favored quantity over quality: the use of easily accessible, cheap datasets has contributed to the instability and poor performance of AI applications in many use cases. While there is still a long way to go toward training AI on fully unbiased, high-quality datasets, healthcare company Presagen has developed a novel technique that automatically cleans poor-quality data, even data supplied with bad intent, during the training of AI algorithms. The company has recently developed a range of patent-pending AI technologies for real-world problems that apply beyond healthcare. Read more here.
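To make the idea of automatic data cleaning concrete, here is a minimal generic sketch of one common heuristic: flag training samples whose label disagrees with a held-out, cross-validated prediction. This is an illustrative assumption on our part, not Presagen's actual (patent-pending) method; the nearest-centroid classifier and the toy dataset are chosen purely for simplicity.

```python
import numpy as np

def flag_suspect_labels(X, y, n_folds=5, seed=0):
    """Flag samples whose held-out prediction disagrees with their label.

    A generic cross-validation heuristic for spotting mislabeled data;
    illustrative only, not any specific vendor's technique.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_folds)
    suspects = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Simple nearest-centroid classifier trained on the other folds.
        classes = np.unique(y[train])
        centroids = np.stack(
            [X[train][y[train] == c].mean(axis=0) for c in classes]
        )
        for i in test:
            dists = np.linalg.norm(centroids - X[i], axis=1)
            pred = classes[np.argmin(dists)]
            if pred != y[i]:  # held-out disagreement => suspect label
                suspects.append(int(i))
    return sorted(suspects)

# Two well-separated clusters with two deliberately flipped labels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[[3, 60]] = 1 - y[[3, 60]]  # inject label noise at indices 3 and 60
print(flag_suspect_labels(X, y))
```

In practice, flagged samples would be reviewed or down-weighted rather than silently dropped, since disagreement can also indicate a hard-but-correct example.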
Further, the latest developments include hybrid cognitive systems that combine model-based methods with machine learning, as outlined by the Association for the Advancement of Artificial Intelligence (AAAI). By pairing model-based algorithms with learned components, these systems yield more stable AI behavior. Read more here and here.
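One common reading of such hybrid systems is residual modelling: a known model-based component supplies the baseline prediction, and the learned component only corrects what the model misses. The sketch below assumes a toy free-fall setting with an unmodelled drag-like term; the setup is our illustrative assumption, not an example taken from the AAAI material.

```python
import numpy as np

def physics_model(t):
    """Model-based part: free fall without drag, d = 0.5 * g * t**2."""
    g = 9.81
    return 0.5 * g * t**2

# Synthetic observations: free fall plus an unmodelled drag-like term.
t = np.linspace(0.1, 2.0, 40)
observed = (physics_model(t) - 0.4 * t**3
            + np.random.default_rng(0).normal(0, 0.05, t.size))

# Learned part: fit a small polynomial to the residual only, so the
# data-driven component never has to rediscover the known physics.
residual = observed - physics_model(t)
coeffs = np.polyfit(t, residual, deg=3)

def hybrid_model(t_new):
    return physics_model(t_new) + np.polyval(coeffs, t_new)

# The hybrid prediction tracks the data far better than physics alone.
err_physics = np.mean((observed - physics_model(t)) ** 2)
err_hybrid = np.mean((observed - hybrid_model(t)) ** 2)
print(err_hybrid < err_physics)  # True
```

The stability benefit comes from the division of labor: the learned part is constrained to a small correction, so even a poorly trained residual model degrades gracefully toward the physics baseline.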