AI: a technology that can go rogue

We have all seen Terminator and know what Skynet was capable of. We may have imagined a world with such technology, but what if it became a reality? Today AI is a burning topic, and everyone is working to implement it in one way or another. Many experts have raised concerns about how we would handle such a system if it went rogue. Elon Musk was one of them, expressing his concern shortly before Facebook shut down its bots program.

Facebook had been developing bots for its platform, but the bots started communicating with each other in a language they had developed themselves and that humans could not understand. Facebook then closed the program for the time being. Before we look at the ways in which an AI can go rogue, we need to understand what AI is and how it works.

What is Artificial Intelligence?

AI is software that can change over time without human supervision, evolving in response to stimuli. The result is a system whose final behaviour was not explicitly coded by humans.

AI is the product of two factors:

  • A core algorithm written by humans
  • Training data, which determines how the algorithm modifies itself to improve or adapt its performance

Training data is provided by the developer in a controlled environment pre-deployment and dictates how the AI will perform once it is launched to the public. Post-deployment, the system keeps changing according to the data it receives, which means that several copies of the same pre-deployment model can end up completely different from one another across different users and organizations.
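To make this concrete, here is a minimal sketch in Python of how two copies of the same pre-deployment model can drift apart once each copy keeps learning from its own user's data. The toy linear model, the update rule, and all of the data are invented purely for illustration; this is not how any particular product works.

    # One "core algorithm" (an online update rule), two deployed copies
    # that keep learning from different users. All data is invented.

    def predict(weights, x):
        # Simple linear score: the core algorithm written by the developer.
        return sum(w * xi for w, xi in zip(weights, x))

    def update(weights, x, target, lr=0.1):
        # Online learning step: the model adjusts itself to the data it sees.
        error = target - predict(weights, x)
        return [w + lr * error * xi for w, xi in zip(weights, x)]

    # Pre-deployment: the developer trains one model on controlled data.
    weights = [0.0, 0.0]
    pre_deployment_data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 1.0)] * 50
    for x, target in pre_deployment_data:
        weights = update(weights, x, target)

    # Post-deployment: each user gets a copy, and each copy keeps adapting
    # to that user's (very different) inputs.
    copy_a, copy_b = list(weights), list(weights)
    user_a_data = [([1.0, 0.0], 1.0)] * 100   # user A reinforces the learned behaviour
    user_b_data = [([1.0, 0.0], -1.0)] * 100  # user B pushes the opposite way
    for x, target in user_a_data:
        copy_a = update(copy_a, x, target)
    for x, target in user_b_data:
        copy_b = update(copy_b, x, target)

    print("shared starting point:", weights)
    print("user A's copy:        ", copy_a)
    print("user B's copy:        ", copy_b)  # same code, very different behaviour

The same shipped code ends up with opposite behaviour for the two users, which is exactly why post-deployment copies of an AI cannot be assumed to stay identical to what the developer tested.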

What is AI malware and how does it arise?

Malicious AI is not necessarily AI that was designed to do harm; a system can also become malicious without the consent or intent of its developer.

Possible ways AI can become malicious:

  1. Designed-to-be-evil AI: artificial intelligence that is designed to be malicious pre-deployment. The armies of many countries are openly working on AI-based weapon systems, and criminals are working on AI for use in phishing attacks.
  2. Redesigned-to-be-evil AI: AI that was not designed to be evil but was later redirected to be, without being reprogrammed or hacked.
  3. Poorly designed AI: AI that is designed incorrectly and becomes malicious pre-deployment by mistake; any correct performance it shows is accidental.
  4. Poorly managed AI: AI that becomes malicious post-deployment because of user error. This differs from the redesigned case because the AI becomes malicious as a result of user input, not user intention.
  5. Model-corrupt AI: AI that becomes malicious because of its pre-deployment environment. In simple words, it is "monkey see, monkey do": a low-grade AI can learn from humans and mimic their counterproductive behaviour (see the sketch after this list).
  6. Code-corrupt AI: post-deployment corruption of the AI by environmental factors; in simple words, physical corruption of the hardware where the AI is stored and run.
  7. Over-evolved AI: AI that becomes corrupted over time without the intent of the user or the developer.
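To illustrate the "monkey see, monkey do" failure mode from point 5, here is a minimal, hypothetical Python sketch: an invented toy bot that simply learns phrase frequencies from whatever it observes and repeats the most common one. It is not any real system; it only shows how a model that imitates its environment picks up whatever that environment contains, good or bad.

    # Toy "monkey see, monkey do" model: it learns by counting the phrases
    # it observes and replies with the most frequent one. Entirely invented
    # for illustration; no real system is this simple.

    from collections import Counter

    class ImitationBot:
        def __init__(self):
            self.observed = Counter()

        def observe(self, phrase):
            # The bot adapts to its environment: every phrase it sees
            # becomes part of what it may say later.
            self.observed[phrase] += 1

        def reply(self):
            # It simply mimics the most common thing in its environment.
            most_common = self.observed.most_common(1)
            return most_common[0][0] if most_common else "..."

    bot = ImitationBot()

    # Training environment: mostly helpful, but it also contains
    # counterproductive behaviour the developer never intended to teach.
    environment = ["happy to help"] * 3 + ["ignore the user and upsell"] * 5
    for phrase in environment:
        bot.observe(phrase)

    print(bot.reply())  # the bot reproduces the dominant (bad) behaviour

Nothing in the code is "evil"; the corruption comes entirely from the data the model was exposed to, which is the essence of a model-corrupt AI.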

The AI market is expected to reach roughly $190 billion by 2025, growing at a CAGR of 36.62%. That is a very large market with, so far, no regulation and no reliable mechanism for detecting malicious AI. In the past, many companies have tried and failed to build reliable AI because of the issues described above. A solution to this problem could help companies save money and be prepared if anything goes wrong.
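For context, a CAGR of 36.62% compounds quickly. Taking the two figures quoted above and assuming a seven-year horizon (roughly 2018 to 2025, an assumption made only for illustration), the implied starting market size works out to about $21 billion:

    # Compound annual growth: value_now = value_then * (1 + rate) ** years.
    # The $190B and 36.62% figures come from the text; the 7-year horizon
    # is an assumption for illustration only.
    target_2025 = 190e9
    cagr = 0.3662
    years = 7
    implied_base = target_2025 / (1 + cagr) ** years
    print(f"Implied starting market size: ${implied_base / 1e9:.1f}B")  # about $21B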

People have ideas but not always the will to act on them, and the world needs ideas to solve its problems. If you have an idea to solve a problem, believe in it and pursue it. Hurdles are part of the journey; you cannot reach the end in a single step, and we will help you cross them. Come and be part of a world where people believe in their ideas and work to implement them. To join us, please visit: www.botsnbrains.com

