Fear of AI Distracts from the Real Threat

Artificial intelligence is very much a part of our lives, from digital assistants like Siri to autonomous combat vehicles. Mark Zuckerberg presents AI as beneficial technology, destined to cure diseases and filter huge quantities of information. Many others fear its potential to evolve beyond our control or produce rogue algorithms. As with any emerging technology, it is unclear what the future holds for AI, but we know it has tremendous potential that is already being leveraged in industry, business and national security.

Yes, there’s a scary side to AI: the fear that it could produce out-of-control machines that grow to outsmart humans. Think of the robot HAL 9000 in 2001: A Space Odyssey, The Terminator movies, or the end of the human workforce leading to wide-scale unemployment. Ultimately, AI’s impact as a positive force in society depends on the extent to which developers and lawmakers can find ways to limit its vulnerabilities. To support its use for good, a group of prominent tech titans including Peter Thiel, co-founder of PayPal, and Elon Musk, founder of SpaceX, committed $1 billion to OpenAI, a nonprofit company aimed at ensuring responsible AI development.

Consider examples of AI technology that have already proved valuable, contributing to current and future daily tasks:

  • empowering autonomous vehicles (drones, self-driving cars)
  • proving mathematical theorems
  • playing games (such as chess and Go)
  • facilitating search engine results (Google, etc.)
  • providing spam filtering
  • detecting, evaluating and patching software vulnerabilities
  • predicting flight delays
  • predicting judicial decisions
  • targeting online ads
  • increasing traffic to websites, anticipating purchases and supporting sales

AI technology is particularly relevant to the Defense Department and national security. Former Defense Secretary Mark Esper identified AI as an essential technology that the U.S. must master to stay competitive on the world stage, noting its potential to transform every aspect of the battlefield, from the back office to the front lines. At the Virtual Joint Artificial Intelligence Center Symposium (Sept. 9, 2020), Mr. Esper said that the first country to field AI will have enormous advantages over its competitors. He voiced his concern over China and Russia using AI in a bellicose way, referencing China’s use of artificial intelligence and related technologies to repress Muslim minorities, journalists and pro-democracy protesters. In the U.S., the DoD continues to evaluate and develop AI technologies, looking to leverage them in ways that protect our citizens, troops and homeland security.[1]

The good news for those who fear artificial intelligence is that computers won’t be surpassing their human creators anytime soon, thanks to the challenges of data preparation. In his article “The Achilles’ Heel of AI,” Ron Schmelzer reminds us of the classic adage “Garbage In, Garbage Out”: if the data feeding a model are bad, wrong or rife with error, the results are trash. Machine learning algorithms require clean, accurate, well-labeled data to produce correct results, and data preparation is surprisingly human-intensive work. Computers have no creativity; they cannot think or feel.

According to a recent report from Cognilytica, an AI advisory firm, data preparation and engineering tasks account for 80% of the time consumed in most AI/machine learning projects. Making sense of unstructured data is so difficult for computers that the market for AI and machine learning data preparation solutions is estimated to grow to $1.2 billion by the end of 2023. One tool helping AI become more efficient is natural language processing (NLP), which can scrape unstructured data and make it usable. Combining AI with NLP yields both more data and cleaner data, but that still requires unique formulas and programs, and NLP is in its infancy.
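To make the cleanup step concrete, here is a minimal sketch, in plain Python with no NLP library, of the kind of normalization that turns messy, unstructured text into consistent data a model can learn from. The sample records and the `clean` function are invented for illustration, not drawn from any specific tool:

```python
import re

# Three raw, unstructured records: the first two describe the same
# event but differ in casing, punctuation and whitespace.
RAW_NOTES = [
    "  Flight DL1234 DELAYED -- weather!!  ",
    "flight dl1234 delayed (weather)",
    "Flight AA98 on time.",
]

def clean(text: str) -> list[str]:
    """Lowercase, strip punctuation, and split into tokens."""
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # drop punctuation/symbols
    return text.split()

cleaned = [clean(note) for note in RAW_NOTES]

# After cleaning, the two messy variants normalize to the same token
# list, so a model sees one consistent signal instead of two noisy ones.
print(cleaned[0] == cleaned[1])  # True
print(cleaned[0])                # ['flight', 'dl1234', 'delayed', 'weather']
```

Even this toy example hints at why data preparation dominates project timelines: every new source of messiness requires another rule, and real-world text is far messier than three sample strings.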

While the DoD, private industry and the international community should put policies in place to ensure AI is used responsibly, the technology depends on humans to exist. AI becomes more prominent every day, but the full reality is that many limitations remain: it still requires substantial coding, direction and assistance from humans, and it is only as effective as the data it receives. Industry and government should work together to use AI/ML/NLP to identify threats and improve security, workplace efficiency, and data and information management at the enterprise level. We should fear the bad actors and enemies at our gates and firewalls, and recognize that AI is not the threat but a tool with great potential.

Moe Jafari is CEO of Executive 1 Holding Company.

The original published article can be read at Federal News Network: https://federalnewsnetwork.com/commentary/2021/03/fear-of-ai-distracts-from-the-real-threat/
