With the technological hype around AI in full swing, society has reached an inflection point, and governments, businesses, and organizations need to think about carving out executive-level responsibility for AI and machine learning technologies, in both a collective and a responsive way. This doesn't mean that every organization needs to rush out and name a new Chief AI Officer (CAIO). However, if you're a CEO or a C-level member of a business, you should have some way of measuring accountability for AI and machine learning at the senior management level. Below are several key values for a CAIO, and for any company looking to get involved in AI and machine learning.
Now, this isn't just about machine learning and technical talent for the development of AI; that approach will only get a business so far. Instead, you must expand your definition of AI talent to include thinking about how AI can be used by your workforce, as well as the impact AI technologies will have on your business more broadly. You don't want to roll out machine learning technologies and have staff worrying about job security. So look at the types of skills your employees will need to cultivate in order to thrive and flourish in the era of technological upheaval that AI promises to bring. It may be worth nominating a champion to spearhead the acquisition of the tacit skills your employees will need.
A lot of the data in most organizations is siloed. If you want AI to run across your entire business as smoothly as possible, a unifying view of data governance across the organization is necessary. To accomplish this, your CAIO needs to pull these data sources together and govern them accordingly. Now, this doesn't mean keeping all your data in one place: there's a reason banks get robbed, and it's that keeping all the money in one place makes them a target. To some, data is the new gold; to others, it's the new oil. Regardless, your data is your source of innovation and what will set you apart from your competitors, so knowing how to encourage the flow of data is important. Finally, keep in mind that data begets more data, so as your business moves along its digital journey, data storage can become a very big issue, and how you handle your growing data will determine the long-term success of your business.
Responsible AI is the third broad category your organization needs to be ready for. One only needs to read the news around some of the big tech companies to understand where things go wrong. As such, it's paramount that companies start testing their products thoroughly before shipping them to market. It's also important for your business to invest in limited, randomized pilot studies where you can test the outcomes and durability of your AI-embedded technologies before risking public embarrassment, or even worse. Here are some values to help shape your ethics and principles in AI governance.
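To make the pilot-study idea concrete, here is a minimal sketch of a randomized pilot: customers are randomly split into a treatment group (gets the AI feature) and a control group, and the average outcomes are compared. Everything here is simulated and hypothetical; in a real pilot the outcomes would be measured, not generated.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Hypothetical pool of 200 customers eligible for the pilot.
customers = list(range(200))
random.shuffle(customers)

# Randomly assign half to the AI-assisted experience, half to control.
treatment = set(customers[:100])

def outcome(customer_id, got_ai):
    """Simulated outcome score; in a real pilot this would be measured."""
    base = random.gauss(50, 10)
    return base + (3 if got_ai else 0)  # assumed small simulated uplift

treated = [outcome(c, True) for c in customers if c in treatment]
control = [outcome(c, False) for c in customers if c not in treatment]

lift = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated lift from AI feature: {lift:.2f} points")
```

The point of the random assignment is that, on average, the only difference between the two groups is the AI feature itself, so the measured lift (or lack of one, or harm) can be attributed to it before a full rollout.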
Closely related to responsibility, a business needs to think about where it will be comfortable drawing the line between machines being accountable for things (expert systems) and where humans need to be in the loop. Just because an automated process spits out an answer doesn't mean it's correct. There are very few things you want to trust entirely to machines, so you really need to think about human-level accountability.
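One common pattern for keeping a human in the loop is to auto-apply only high-confidence decisions and escalate everything else to a person. The threshold and the model outputs below are made up for illustration:

```python
# Minimal human-in-the-loop triage sketch. The predictions below are
# stand-ins for a real model's (label, confidence) output.
REVIEW_THRESHOLD = 0.90  # hypothetical cutoff; tune per use case

predictions = [
    ("approve", 0.97),
    ("deny", 0.62),   # too uncertain to act on automatically
    ("approve", 0.88),
    ("deny", 0.99),
]

def triage(label, confidence, threshold=REVIEW_THRESHOLD):
    """Auto-apply confident decisions; escalate the rest to a human."""
    if confidence >= threshold:
        return ("auto", label)
    return ("human_review", label)

routed = [triage(label, conf) for label, conf in predictions]
auto = [r for r in routed if r[0] == "auto"]
escalated = [r for r in routed if r[0] == "human_review"]
print(f"{len(auto)} auto-applied, {len(escalated)} sent for human review")
```

The escalation path is where the human-level accountability lives: someone's name is attached to every decision the machine wasn't confident enough to make alone.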
Which gets into explainability as well. When is it okay for a machine to decide something without you knowing why? As humans, we are black boxes even to ourselves, but evolution has provided us with social cues that enable us to trust and interact with each other. To establish trust between your AI and your customers, your business needs to follow rules, regulations, and best practices. In addition, we must collectively pursue methods and techniques that help explain how an AI technology reached its conclusion. A paper published in 2016 by Ribeiro et al. from the University of Washington, titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier, explores such methods.
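The core idea of Ribeiro et al.'s method (LIME) is to explain one prediction by fitting a simple, interpretable model to the black box's behavior in a small neighborhood around that instance. The sketch below is not their implementation, just a bare-bones illustration of that idea against a made-up black-box function whose true behavior depends mostly on feature 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque model: depends mostly on feature 0."""
    return (3.0 * X[:, 0] + 0.2 * X[:, 1] > 0).astype(float)

# Instance whose prediction we want to explain.
x = np.array([0.5, -1.0])

# 1. Sample perturbations in a neighborhood around x.
Z = x + rng.normal(scale=0.5, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to x (closer = more influential).
weights = np.exp(-np.sum((Z - x) ** 2, axis=1))

# 3. Fit a weighted linear model locally (least squares with intercept).
A = np.column_stack([Z, np.ones(len(Z))])
W = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)

print("Local feature weights:", coef[:2])
```

The fitted local weights recover that feature 0 dominates this particular decision, which is exactly the kind of human-readable justification the explainability discussion above is asking for.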
There's a big issue with biased data. Failure to screen your data for bias only increases the odds that it contains unwanted biases. In today's society, biased data puts a company's reputation, as well as its consumers, at risk. That's unacceptable. We need to make sure that fairness is enforced. I discuss this in some detail in my post: Building trust in AI.
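One of the simplest screens is to compare outcome rates across groups in your historical data, a check often called demographic parity. The records below are toy, made-up values; in practice you would run this over your own decision logs:

```python
from collections import defaultdict

# Toy historical decisions; each record is (group, approved).
# Groups and rates here are invented for illustration only.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
parity_gap = max(rates.values()) - min(rates.values())

print("Approval rates by group:", rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
```

A large gap doesn't prove unfairness on its own, but it is exactly the kind of red flag that should trigger a closer look before the data is used to train anything.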
It wouldn't be too difficult to build AI systems that take advantage of grey areas and loopholes in law and legislation. Instead, we should be designing our AI systems to be honest and to follow the rules set out for business and society, rather than gaming the system. Say, for example, your company is building a self-driving car, or perhaps a car with AI sensing technology embedded inside. Such a car should follow the speed limit. However, it's not impossible to imagine an AI-embedded technology that senses where the traffic zones, speed cameras, and police are, and then uses that data to game the system and drive fast and recklessly without getting caught. There will of course be a market for such a device, and it will take a while for governments to get wind of it. However, it's important to remember that just because something can be built doesn't mean it should be.
These are pretty good starting-point values for any Chief AI Officer to consider, no matter what field they come from.
Hope this helps…