Still in its Infancy, AI Can Do Good or Bad or Both
By Andy Marken
We use it – heck, can’t not use it because it’s everywhere – but we’re not huge fans of AI (artificial intelligence).
We just don’t get much satisfaction out of having an answer handed to us.
The best things we’ve learned are the ones we had to work out ourselves…think, sweat, swear, squirm, try, stumble and suddenly BAM! solved.
Actually, we prefer going through all of that trial and error and learning from our mistakes. Personal struggle toward a defined end is better, and often the answer wasn’t even the one we thought we were aiming for; it turns out to be even better.
Even though Amazon is eyebrow-deep in developing/refining/using AI, Jeff Bezos admitted in his recent shareholder letter that the company would make a number of costly mistakes going forward because that’s what people do to succeed.
Unfettered, AI is going to lead to disasters … bet on it.
Don’t believe us? Just take a look through filmdom – 2001: A Space Odyssey, I, Robot, Morgan, Tau, Judge Dredd, Westworld, Minority Report.
Use of AI doesn’t end well.
We know what you’re going to say, “Yeah sure, but those are only movies.”
All too often, the real world brings to life what creative minds develop. No, we don’t believe zombies and flying dragons will appear on the horizon…at least we hope not.
The problem with AI is that it can do anything developers (and their bosses) want it to do.
For Amazon, it continually analyzes what you bought/looked at and constantly suggests other neat ideas for you to buy/view.
For Netflix it analyzes what you’ve watched, how much of it you watched, when/where you watched it and makes “helpful” suggestions on stuff it really thinks you want to/should watch.
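At its simplest, the kind of viewing-history analysis described above boils down to comparing what you’ve watched with what similar viewers have watched. Here’s a minimal, purely illustrative sketch in Python – the viewers, titles and scoring are all made up, and real recommendation engines are vastly more sophisticated:

```python
# Hypothetical sketch of history-based suggestions: recommend titles
# watched by viewers whose tastes overlap with yours.
from collections import Counter

# Made-up watch histories: viewer -> set of titles watched
histories = {
    "alice": {"Westworld", "Minority Report", "Tau"},
    "bob": {"Westworld", "Minority Report", "I, Robot"},
    "carol": {"Morgan", "Judge Dredd"},
}

def suggest(viewer, histories):
    """Score unseen titles by how much their watchers overlap with us."""
    seen = histories[viewer]
    scores = Counter()
    for other, titles in histories.items():
        if other == viewer:
            continue
        overlap = len(seen & titles)   # shared titles = crude similarity
        for title in titles - seen:    # only suggest what we haven't seen
            scores[title] += overlap
    return [title for title, score in scores.most_common() if score > 0]

print(suggest("alice", histories))
```

Because “alice” and “bob” share two titles, alice gets bob’s unseen pick (I, Robot) suggested; carol shares nothing, so her titles score zero. That overlap-counting is the germ of what the services do with far richer signals (how much you watched, when and where).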
Every company has a “major” project underway to harness AI for the customer (and their bottom line).
Every country has a major initiative underway to ensure it’s ahead of every other country in implementing leading-edge AI for its own good.
Every HR department is scrambling to hire the best/sharpest AI developers to fill the slots for those companies/countries, even as educational institutions and researchers say there just aren’t enough really knowledgeable people to develop/refine/implement it and won’t be for a llllooonnnggg time!
The big issue is that corporate/governmental executives are rushing forward without clear guidelines or insights on what it will do and how it will do it.
As Steve Jobs said years ago, “People don’t know what they want until you show it to them. That’s why I never rely on market research. Our task is to read things that are not yet on the page.”
Bezos said much the same thing in his recent shareholders letter, “We could not foresee with certainty what those programs would eventually look like, let alone whether they would succeed, but they were pushed forward with intuition and heart, and nourished with optimism.”
People have this vision that AI is the technology that’s going to boost productivity; solve our skilled labor shortage; make life easier/safer at home/work, on the road, in the air; everything.
Or, it’s the thing that’s going to steal jobs or worse.
AI is actually a cluster of different technologies – machine learning, natural language processing and others – that work with the cloud (systems/storage somewhere), data (lots and lots of data), analytics, processors, GPUs (graphics processing units), sensors, devices and things to ultimately do just about anything it’s programmed to do.
AI involves the ability of machines to emulate human thinking, reasoning and decision-making.
AI isn’t new, but it garnered fresh excitement when Google’s AlphaGo triumphed over Ke Jie, the world’s top player of the ancient Chinese game of Go.
Two months after the defeat, China’s president, Xi Jinping, laid out the country’s Next Generation Artificial Intelligence Development Plan, part of the nation’s blueprint for the future with AI, big data and the Internet as core technologies.
Since then, they have committed to build a $150B AI industry by 2030.
In an article in The Atlantic, Henry Kissinger warned that AI is moving so quickly it could soon diminish human intelligence and creativity. Yet AI is used all the time and is already an integral part of people’s daily lives.
When you access Netflix, HBO, Hulu or another streaming service and stream a movie, part of the decision-making process determines which compression scheme and codec should be used, along with other technical aspects outlined by the content owner, the streaming service and the screen you’re using at the time.
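That per-stream decision can be pictured as a simple lookup: match the device’s decoding abilities and screen against a ladder of resolution/bitrate options. The sketch below is a hypothetical simplification – the codec names, ladder rungs and function name are illustrative, and real services apply far richer, content-owner-specific rules:

```python
# Hypothetical sketch: pick a codec and bitrate-ladder rung from the
# device profile and the bandwidth available right now.
def pick_rendition(codecs_supported, screen_height, bandwidth_kbps):
    """Return (codec, vertical resolution, bitrate in kbps) for one session."""
    # Prefer the more efficient codec when the device can decode it
    codec = "hevc" if "hevc" in codecs_supported else "h264"
    # Illustrative bitrate ladder: (vertical resolution, bitrate in kbps)
    ladder = [(2160, 15000), (1080, 5000), (720, 3000), (480, 1500)]
    for height, bitrate in ladder:
        # Take the first rung the screen can show and the pipe can carry
        if height <= screen_height and bitrate <= bandwidth_kbps:
            return (codec, height, bitrate)
    return (codec, 480, 1500)  # floor rendition if nothing else fits

print(pick_rendition({"h264", "hevc"}, 1080, 6000))  # ('hevc', 1080, 5000)
```

A 1080p screen on a 6 Mbps connection gets the 1080p/5 Mbps rung in the more efficient codec; drop the bandwidth or the codec support and the same logic quietly steps down the ladder.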
In today’s always-on environment, power comes from controlling data, making sense of it, and using it to influence how people think, act, respond, behave.
With the growth of mobile networks and devices, access to and use of that data have only increased. And they will continue to grow.
It’s difficult for people to comprehend, much less control, all the information collected about them.
And it will increase as we move into the era of AI.
But it doesn’t always work out as planned.
According to Gartner, over the next 3-4 years 85 percent of AI projects will deliver the wrong information, recommendation or solution because of bias in the data, the algorithms or the people managing them.
It’s not planned, but people program and train AI systems, so the systems learn their biases (check the movies mentioned earlier).
Because most extensive, robust AI systems are cloud-based, they are available to more people for more applications. However, the supply of solid AI skills hasn’t kept pace with demand, so more mistakes will occur.
And it’s going to continue for quite a while.
Look at Google.
They put together a number of seemingly independent AI ethics committees in the U.S., Europe and Asia, only to disband them because even the ethicists couldn’t agree on what information was needed, what they would do with it once they got it and how the ethics of ethics could ultimately be implemented.
In truth, biased AI systems are more the norm than the exception and the key is to recognize that the biases exist, or may exist, and take steps to limit the damage.
Organizations and people start with the best of intentions and then they get sidetracked because of the real world. Think about it:
Twitter was, in one executive’s words, “the free speech wing of the free speech party.”
Facebook began with the idea of making the world more open and connected.
Google wanted to organize the world’s information and make it accessible to all.
As St. Bernard of Clairvaux supposedly said, “Hell is full of good intentions or desires.”
Some countries and the Borg would like you to believe “resistance is futile” when they intend to assimilate people into their collective.
While the EU and democratic governments around the globe are placing a growing list of rules/regulations (and fines) around the capture, processing, storage and use of personal data, authoritarian China is implementing AI with local firms – Baidu, Alibaba, Tencent, iFlytek. These firms funnel data to China’s Police Cloud System, which monitors seven categories of people, including those who “undermine stability.”
The tech companies have created an ecosystem around a flow of data that could take advantage of the AI boom, collecting user data on payments, interests and messages.
The goal is to build a system that gives every citizen and company a social credit score.
Connected devices and cameras everywhere are pivotal to the system’s success.
Last year, there were about 713M smartphone users in China (about 2.5B worldwide).
In today’s digital world, the supercomputer in your pocket may be just a phone/communications/connection device to you, but it’s really a darn good data-capture and homing device.
It tracks your every “like,” keeps track of folks you talk to, things you buy, stuff you read/watch, where you go.
And when it gets bumped up to a 5G network, it will be able to do even more … faster.
China has made a major push to apply facial recognition to policing and surveillance, with an estimated 200M surveillance cameras nationwide using facial recognition, smartglasses and other technology.
The facial recognition technology is being used across the country’s economy – securities, finance, payments, etc.
While many have raised concerns about how China is developing and deploying AI in terms of potential human rights abuse and threatening the future of technology, people tend to overlook:
That the average Londoner is captured on camera 300 times a day
Americans are caught on camera at least 75 times a day
Video doorbells like Ring, home/business security cameras and security services monitor areas 24×7
China is, as The Economist first put it, the Saudi Arabia of data.
Data privacy protections are on the rise in China, but still weaker than those in the US and much weaker than those in Europe.
As a result, data aggregators have a freer hand in what they can do with what they collect.
AI technology experts recently observed that China has the largest data pool but, without the logistics to handle it, will drown in it.
The U.S., Canada, France, Israel, England and other democratic countries have a structural advantage in research because their top universities attract some of the best and brightest Chinese researchers, who often end up working in those countries.
According to last year’s Diffbot report, five of the top 10 global machine learning talent-producing universities are in China; but 62 percent of the graduates left to study/work in a more transparent, open environment.
Diffbot estimated that there are about 720,000 people skilled in machine learning across the globe, with nearly 70 percent working in the Americas, U.K., India and Israel.
Most of these men and women prefer to work in academic research labs or businesses, despite their periodic missteps, rather than have their work funneled to government agencies.
As Dr. Forbin said in Colossus: The Forbin Project, “I think your mother was right. I think Frankenstein ought to be required reading for all scientists.”