Artificial intelligence (AI) is software that can analyse and learn from patterns using information it is fed, such as language, images, audio, online behaviour and more.
It then computes and predicts outcomes that enable us to perform tasks we could only dream about decades ago, usually with some degree of autonomy. In our handhelds and home gadgets, AI is now incorporated into most smartphone camera modules to enable subject and object recognition, set the ideal focus and apply colour-correction algorithms to images taken even in low-light conditions.
Substantial advances in language processing, computer vision and pattern recognition mean that AI is now integral to people's daily lives.
In Uganda, the applications of AI are limitless. In government, AI could be used to predict the impact of policies on different segments of the targeted population. The Ministry of Health, for example, could analyse the huge amounts of health data it collects from health centres countrywide and look for patterns and trends. This would make it much easier to identify high-risk communities and facilitate early intervention programs.
In the private sector, many companies are already using AI in their customer care services as a quick, cheap and efficient way to screen customer queries. If you have ever used the chat feature on a company's website or app, you most likely first interacted with a chatbot.
In our highly dynamic and often controversial credit and risk sector, AI can predict borrower behaviour: how likely a client is to default on a loan and how much to lend them, among other things. In agriculture, our main economic activity, AI can generate real-time crop insights, such as yield estimation, by analysing weather patterns and soil conditions with the help of sensors and GPS modules embedded in robots and drones.
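To make the credit-scoring idea concrete, here is a minimal sketch of how default risk might be estimated with logistic regression. All figures, feature choices (income and late payments) and records below are hypothetical, invented purely for illustration; a real lender would use far richer data and a vetted model.

```python
import math

# Hypothetical historical records: (monthly income in UGX '000,
# number of late payments), label 1 = client defaulted.
data = [
    ((800, 0), 0),
    ((300, 3), 1),
    ((1200, 0), 0),
    ((250, 4), 1),
    ((900, 1), 0),
    ((400, 5), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def features(x):
    # Scale income down so both features sit on a similar range.
    income, late = x
    return (income / 1000.0, float(late))

# Fit the two weights and bias by simple gradient descent on the log-loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for x, y in data:
        f = features(x)
        p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
        err = p - y  # gradient of log-loss w.r.t. the linear score
        w[0] -= lr * err * f[0]
        w[1] -= lr * err * f[1]
        b -= lr * err

def default_risk(income, late):
    """Estimated probability that an applicant defaults."""
    f = features((income, late))
    return sigmoid(w[0] * f[0] + w[1] * f[1] + b)

print(f"Low-risk applicant:  {default_risk(1000, 0):.0%}")
print(f"High-risk applicant: {default_risk(300, 4):.0%}")
```

The predicted probability could then feed a lending rule, for example capping the loan amount as the risk score rises.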
This would mean more accurate planting and sowing times, irrigation coverage, weed detection, spraying and eventual harvesting. In the healthcare space, providers are already using AI in the diagnosis and screening of patients, the allocation of available doctors, performance appraisals and cost-effective recommendations of medicines for resupply.
In our popular sports, local clubs could use embedded sensors to capture specific player performance metrics, combine these with the player's workout sessions, age and nutrition needs, and then predict that player's optimum performance.
Players could also use such data to justify their compensation during contract negotiations. The technical bench, on the other hand, can use the same information in the optimal selection of teams and strategy. In law enforcement, AI is integral to anti-crime systems for facial, voice and number-plate recognition and the tracking of persons of interest.
It can be deployed in the forensic analysis of crime scenes and the scoring of perpetrator behaviours in pre-emptive crime management. Despite all this, AI systems carry inherent risks. One is a lack of transparency: some incorporate proprietary code, while others are so complex that they are extremely difficult to interpret, making their decision-making processes opaque.
They are generally hard to audit. AI systems may also unknowingly perpetuate or amplify societal biases picked up from biased training data; it is crucial to invest in the development of unbiased algorithms and diverse training data sets to avoid this. AI systems also collect and analyse huge amounts of personal data, raising issues of data protection, privacy and security.
Strict data protection and privacy regulations must be adhered to, and compliance regularly audited. High-stakes decision-making contexts can have significant and far-reaching consequences, as seen in self-driving cars, and this poses a considerable challenge in the design of AI systems. Developers must identify and prioritise the ethical implications of AI technologies.
Cyber security risks associated with the use of AI systems, and the potential for their misuse, have already skyrocketed. Hackers and other nefarious actors are now harnessing the power of AI to change the threat landscape and develop more sophisticated attacks.
Aware of these risks, the government constituted a National Task Force to advise on domesticating such technological advances. The 23-person task force, which comprises engineers, scientists, policymakers and academics, also advises government on disruptive technologies that can be avoided in the future.
Additionally, regulation relevant to AI includes the Data Protection and Privacy Act, 2019; the Electronic Transactions Act, 2011; and the Computer Misuse Act, 2011. With this foundation, Uganda ought to embrace AI to achieve its development objectives.
The writer is an IT practitioner.