Research Essay


Artificial Intelligence
Advancements in technology over the past couple of decades have redefined what is seen as possible and impossible. In the past decade alone, computers have connected people across the world, assisted in medical operations, and even carried out military operations. Given these advancements, the common belief is that in the future every aspect of people’s lives will have some sort of technological influence. To an extent, it has already come to that point. Almost everything people do involves a computer, whether it is driving to work, catching up with friends on social media, or even something as simple as making a cup of coffee. But this generation’s technology still has one thing in common: it is controlled by humans. As of right now, computers are unable to think and make decisions for themselves. However, that may not be the case in the generations to come. A technology called artificial intelligence is being developed right now, through which computers will soon be able to think for themselves and make decisions based on the information programmed into them. It may seem as though humans are on the path to a world similar to the one seen in the popular Terminator movies, but those fears are irrational. There are practical uses for artificial intelligence in the medical field, industrial settings, and the military. The need for humans to perform tasks like high-stakes medical procedures where there is no margin for error, or to monitor a factory 24/7 to make sure everything runs smoothly, would disappear. These practical uses for artificial intelligence are the reasons scientists should continue to develop the technology despite people’s fear of allowing computers to make decisions and learn.
Before even considering trusting technology with a human life, one must understand how this technology makes decisions. All decision making starts with the person who writes the code. The programmer must define a hierarchy of values that dictates the morals of the system (McEneaney). For example, in a medical system the programmer must tell the computer how to weigh survivability, success rate, and other factors before it makes a decision. Beyond this base of “artificial morals,” another key component of artificial intelligence is learning from experience. This is made possible by methods called probabilistic modeling and Bayesian optimization (Ghahramani). In layman’s terms, probabilistic modeling is when a computer works through a list of possible outcomes for a situation and analyzes data it has recorded in the past. Bayesian optimization then weighs the potential risks and benefits of each option and chooses a solution according to the hierarchy of morals the system is programmed to obey. Unfortunately, none of these programs can exist until data becomes more compressible and large amounts of data can be stored in small arrays. Luckily, that last piece of the puzzle is also being developed. Known simply as big data, this approach to storing and handling data is so much more efficient than current methods that it may be able to contain the complex datasets required to run artificially intelligent software (Otero, Peter). With all of these pieces put together, the possibility of a robot that continues to learn from experience while following the moral guidelines set in place by its programmer becomes realistic. The possibilities for this technology would be endless, especially in the medical field, industrial settings, and even the military.
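To make the idea concrete, the short sketch below shows, in simplified Python, how a programmed hierarchy of values could weigh candidate options whose success rates are estimated from recorded past outcomes. The weights, option names, and numbers are all hypothetical illustrations of the concept, not the actual methods described in the cited sources.

```python
# Toy sketch: a "hierarchy of values" plus learning from recorded outcomes.
# All names, weights, and numbers here are hypothetical illustrations.

# 1) The programmer's hierarchy of values: how much each factor matters.
VALUE_WEIGHTS = {"survivability": 0.6, "success_rate": 0.3, "recovery": 0.1}

# 2) "Learning from experience": estimate a success rate from past cases
#    using a simple Bayesian-style update (posterior mean with a uniform prior).
def estimate_rate(successes, failures):
    return (successes + 1) / (successes + failures + 2)

# Candidate options scored 0..1, with success_rate learned from past data.
options = {
    "procedure_a": {"survivability": 0.97,
                    "success_rate": estimate_rate(successes=90, failures=10),
                    "recovery": 0.8},
    "procedure_b": {"survivability": 0.99,
                    "success_rate": estimate_rate(successes=40, failures=10),
                    "recovery": 0.5},
}

# 3) Weigh the risks and benefits of each option and pick the one that best
#    satisfies the programmed hierarchy of values.
def utility(scores):
    return sum(VALUE_WEIGHTS[f] * scores[f] for f in VALUE_WEIGHTS)

best = max(options, key=lambda name: utility(options[name]))
print("chosen option:", best)
```

In this sketch the computer does not “know” what matters; the programmer’s weights decide how survivability, success rate, and recovery trade off against one another, which is exactly why the hierarchy of values is the foundation of the whole system.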
One problem that can lead to disaster in emergency medical services is human error. Surgeons must perform their operations with incredible precision or the patient could lose their life. Unfortunately, humans cannot be one hundred percent accurate all the time, and occasionally this costs people their lives. With this new technology, engineers could design robots that perform the same procedures with greater precision and the ability to adapt to anything that goes wrong in a matter of nanoseconds (Prendergast, Winston, 177). This accuracy and adaptability would virtually eliminate deaths on the operating table.
Another field that would benefit from a more prominent role for technology is industrial operations. Workers who monitor industrial machinery such as factory equipment, power plants, or oil rigs have also been known to make mistakes with severe consequences. Factories have been destroyed, nuclear power plants have suffered meltdowns, and oil rigs have spilled oil into the oceans. The cause of some of these catastrophic failures is the simple fact that the person operating the machinery was not paying attention. Intelligent computers would never have allowed those situations to happen, because they would monitor everything 24/7 without needing rest and would not get distracted (Prendergast, Winston, 216).
The last application for artificial intelligence is the possibility of intelligent robots capable of assisting in military operations. Robots and drones are already a reality in the military, but as of right now they are still remotely controlled by humans. If machines could think for themselves, they could take on military jobs such as building bases, transporting supplies, or extracting wounded soldiers. The risk that inherently comes with giving robots the ability to think and make decisions is that some countries may try to arm robots and program them with the ability to kill. This would give rise to protests questioning a robot’s ability to make the decision to take a life. As a result of these protests, there would inevitably have to be some reform of the current rules of war addressing to what extent this technology could be used (Greenemeier, 45). But if the military used the technology for noncombat roles like the ones listed above, there would likely be no resistance from society, and those jobs could be done with far less manpower.
These ideas of intelligent machines becoming fully integrated into humans’ lives may seem far-fetched to many people, but early forms of this technology have already emerged. Take, for example, Siri, Cortana, Google Now, Alexa, and self-driving cars. These are all examples of this sort of technology in its infancy. Though they have not yet been adopted by everyone in society, they have already raised controversial ethical issues. According to Mac Baker, a Computer & Information Science professor at the Virginia Military Institute, the main ethical problem in artificial intelligence is liability and responsibility. If this technology makes a mistake, who is held accountable? The owner of the machine? The programmer? Or is no one at fault? This is a complex issue that cannot be resolved with existing legal frameworks; the law cannot punish a computer for making a mistake, which leaves the issue unresolved. This is all part of the technology’s development: when a problem arises, it gets fixed so that it will not happen again. The technology will continue to improve until it is nearly perfect and even more reliable than humans, which is the ultimate goal. Until it is more reliable than humans, though, the technology should not be mass-produced or sold to the general public. It is up to the companies developing the technology to ensure its reliability, and they could even be held accountable if the technology makes a major mistake.
To conclude, with technology evolving at its current rate, we could very well see artificial intelligence become a reality. The software could be used to create robots and systems with morals and the ability to learn, and such technology could revolutionize industries like medicine, manufacturing, and the military by eliminating human error. As of right now, people are wary of allowing technology to take the next step and become even more involved in their lives. The reality is that technology already plays a huge part in society, so introducing artificial intelligence would not significantly change people’s way of life. Therefore, mankind should embrace the inevitable and enjoy the benefits that come along with artificial intelligence.
