Research Essay

Artificial Intelligence
Advancements in the field of technology over the past couple of decades have redefined what is seen as possible and impossible. In the past decade alone, computers have connected people across the world, assisted in medical operations, and even carried out military operations. Given these advancements, the common belief is that in the future every aspect of people's lives will have some sort of technological influence. To an extent, it has already come to that point. Almost everything people do involves a computer, whether it is driving to work, catching up with friends on social media, or even something as simple as making a cup of coffee. But this generation's technology still has one thing in common: it is controlled by humans. As of right now, computers are unable to think and make decisions for themselves. However, that may not be the case in the generations to come. Being developed right now is a technology called artificial intelligence, through which computers will soon be able to think for themselves and make decisions based on the information programmed into them. It may seem as though humans are on the path to a world similar to the one seen in the popular Terminator movies, but those fears are irrational. There are practical uses for artificial intelligence in the medical field, in industrial settings, and in the military. The need for humans to perform tasks like high-stakes medical procedures, where there is no margin for error, or to monitor a factory 24/7 to make sure everything runs smoothly, would disappear. These practical uses are the reasons scientists should continue to develop artificial intelligence despite people's fear of allowing computers to make decisions and learn.
Before even considering trusting technology with a human life, one must understand how this technology makes decisions. All decision making starts with the person who writes the code. The programmer must define a hierarchy of values which dictates the morals of the system (McEneaney). For example, in a system built for medical uses, the programmer must tell the computer how to weigh survivability, success rate, and other factors before it makes a decision. Beyond this base of "artificial morals," another key component of artificial intelligence is learning from experience. This is made possible by methods called probabilistic modeling and Bayesian optimization (Ghahramani). In layman's terms, probabilistic modeling is when a computer goes through a list of possible outcomes to a situation and analyzes data it has recorded in the past. Bayesian optimization then weighs the potential risks and benefits of each option and chooses a solution according to the hierarchy of morals the system is programmed to obey. Unfortunately, none of these programs can exist until data becomes more compressible and large amounts of it can be stored in small arrays. Luckily, that last piece of the puzzle is also being developed. Big data techniques for storing and processing information are so much more efficient than current methods that they may be able to contain the complex datasets required to run artificially intelligent software (Otero, Peter). With all of these pieces put together, the possibility of developing an intelligent machine that continues to learn from experience while following the moral guidelines set in place by its programmer becomes very real. The possibilities for this technology would be endless, especially in the medical field, in industrial settings, and even in the military.
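In very rough terms, the "hierarchy of values" and option-weighing process described above can be sketched as a simple scoring loop. This is only an illustration of the idea, not how any real medical system is built; every option name, factor, and weight below is hypothetical:

```python
# Minimal sketch of the decision process described above: a programmed
# hierarchy of values (weights) scores each candidate action, and the
# highest-scoring option is chosen. All names and numbers are hypothetical.

# The "hierarchy of values" set by the programmer: how much each
# factor matters when comparing options.
VALUE_WEIGHTS = {"survivability": 0.6, "success_rate": 0.3, "speed": 0.1}

# Candidate actions, each annotated with estimated factor scores
# (in a real system these estimates would come from recorded past data).
options = {
    "procedure_a": {"survivability": 0.95, "success_rate": 0.80, "speed": 0.40},
    "procedure_b": {"survivability": 0.85, "success_rate": 0.90, "speed": 0.90},
}

def score(factors):
    """Weigh an option's estimated factors according to the value hierarchy."""
    return sum(VALUE_WEIGHTS[name] * value for name, value in factors.items())

# Pick the option the value hierarchy ranks highest.
best = max(options, key=lambda name: score(options[name]))
print(best)  # procedure_b (0.87 beats procedure_a's 0.85)
```

The key point the sketch makes is that the machine never invents its own priorities: the weights are fixed by the programmer, and "learning" only refines the estimated factor scores.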
One problem that can lead to disaster in emergency medical services is human error. Surgeons must perform their operations with incredible precision, or else the patient could lose their life. Unfortunately, humans cannot be one hundred percent accurate all the time, and occasionally this does cost people their lives. With this new technology, engineers could design robots that perform the same procedures with greater precision and adapt to anything that goes wrong in a matter of nanoseconds (Winston and Prendergast 177). This accuracy and adaptability would virtually eliminate casualties on the operating table.
Another field that would benefit from technology taking a prominent role is industrial work. Workers who monitor industrial machinery such as factory equipment, power plants, or oil rigs have also been known to make mistakes with large consequences. Factories have been destroyed, nuclear power plants have had meltdowns, and oil rigs have spilled oil into the oceans. The cause of some of these catastrophic failures is the simple fact that the person operating the machinery was not paying attention. Intelligent computers would never have allowed those situations to happen, because they could monitor everything 24/7 without needing rest and without getting distracted (Winston and Prendergast 216).
The last application for artificial intelligence is the possibility of intelligent robots capable of assisting in military operations. Robots and drones are already a reality in the military, but as of right now they are still remotely controlled by humans. If machines were able to think for themselves, they could help with military jobs such as building bases, transporting supplies, or extracting wounded soldiers. The risk that inherently comes with giving robots the ability to think and make decisions is that some countries may try to arm robots and program them with the ability to kill. This would give rise to all sorts of protests doubting a robot's ability to make the decision to take a life. As a result of these protests, there would inevitably have to be some sort of reform to the current rules of war addressing the extent to which this technology could be used (Greenemeier 45). But if the military used the technology for other purposes like those listed above, there would likely be no resistance from society, and those jobs could be done with much less manpower.
These concepts of intelligent machines becoming fully integrated into humans' lives may seem far-fetched to many people, but some cases of this technology have already emerged. Take for example Siri, Cortana, Google Now, Alexa, and self-driving cars. These are all examples of this sort of technology in its infant stage. Though they have not yet been fully adopted by everyone in society, they have already given rise to controversial ethical issues. According to Mac Baker, a Computer & Information Science professor at the Virginia Military Institute, the main ethical problem involved in artificial intelligence is liability and responsibility. If this technology makes a mistake, who is held accountable? The owner of the machine? The programmer? Or is no one at fault? This is a complex issue that cannot be solved using existing legal precedents. The law cannot punish computers for making a mistake, which leaves the issue unresolved. This is all part of the technology's development: when a problem arises, it gets fixed so that it will not happen again. The technology will continue to improve until it is nearly perfect and even more reliable than humans, which is the ultimate goal. Until it is more reliable than humans, though, the technology should not be mass produced or sold to the general public. It is up to the companies developing the technology to ensure its reliability, and they could even be held accountable if it makes a major mistake.
To conclude, with technology evolving at its current rate, we could very well see artificial intelligence become a reality. The software could be used to create robots and systems with morals and the ability to learn. Such technology could revolutionize many industries, including medicine, manufacturing, and the military, by eliminating human error. As of right now, people are wary of allowing technology to take the next step and become even more involved in their lives. The reality is that technology already plays a huge part in society, so introducing artificial intelligence would not change people's way of life significantly. Therefore, mankind should embrace the inevitable and enjoy the benefits that come along with artificial intelligence.

Argumentative Essay

In the age of technology, the machines we have created have managed to influence human behavior, and sometimes even to promote bad decisions. A specific example, which has become more and more evident over the years, is how video games, especially those of a violent nature, have impacted the behavior of kids. Much of the time, kids are not mature enough to completely separate what is happening on the screen from real life. This can allow violent behavior to take root, which is why it is the responsibility of parents to censor what games their children play. Many studies support the claim that violent video games negatively affect children. The only logical solution to this issue is for parents to censor their children from violent video games.
Due to the complexity of the issue and the inability of companies to prevent their games from falling into children's hands, the responsibility falls on the parents. Companies that produce video games do what little is within their power to prevent children from being exposed to the violence. The only thing they can do from their end is place a rating on the game which dictates how old one must be to purchase it. That precaution is easily bypassed when parents buy a game for their children with little to no knowledge of how violent the game actually is. After prolonged exposure to the violence in video games, children can develop aggressive behavior, which can even lead to violent actions. Video games portray violence as normal, appropriate, and in most cases successful (Anderson, "Violent Video Games and Other Media Violence"). This causes children to see the world as a more hostile place and also hinders their ability to come up with nonviolent solutions to the problems they face. By frequently committing violence in a game, children become much more likely to react violently when faced with a conflict in real life, because they have learned their reactions from video games in which violence is normal and accepted. These tendencies to react violently have been shown to carry over into adulthood and affect people for the rest of their lives (Anderson, "Life Lessons: Children Learn Aggressive Ways of Thinking and Behaving from Violent Video Games"). It is a shame that this issue could be so simply resolved by parents taking the initiative and no longer allowing their children to be exposed to the mature content of some games. The first and only step parents would need to take is to research the games their kids ask for before buying them.
This simple way for parents to censor what games their kids play would, without a doubt, improve the behavior of their children and prevent them from possibly developing the aggressive behavior described earlier.
Despite the popular belief that violent video games can have negative effects on adolescents, some argue against the presented research. They believe that video games play little to no part in the development of aggression in children, arguing that the strongest contributors to violent behavior are mental stability and quality of home life rather than virtual violence (Jenkins, "Reality Bytes: Eight Myths about Video Games Debunked"). They also claim that the studies conducted have been vague and inconclusive, stating that the sample sizes have been too small. Their last point is that it is merely coincidental that violent youths also play video games, because 90% of boys and 40% of girls play video games (Jenkins). These are fair points, but there is an answer to each of them.
While it is true that aggression can stem from mental instability and home life, children who are mentally stable and have healthy home lives are still at risk of developing violent habits from mature video games. A kid can have a perfect life at home and be completely mentally healthy, but video games can plant violent thoughts in their mind and teach them that violence is acceptable. The second point concerns the recent studies being done around the country and the belief that they are insufficient sources from which to draw conclusions. A fact the critics probably overlooked is that such studies have been ongoing for the past 50 years, since the very first video games, and the results over those 50 years show consistent patterns relating virtual violence to increased aggressive behavior in youth. The last claim is that since 90% of boys and 40% of girls play video games, it is impossible to draw a connection between aggressive behavior in a select few children and video game violence. The fact of the matter is that video games cannot be ruled out as a contributor that pushes certain children, who may have other issues such as being mentally unstable or living in a hostile environment, to commit violence. If children are taught to be violent or learn that violence is acceptable, they are more likely to react violently, plain and simple. With all things considered, it is still clear that if the violence provided by video games were removed from children's lives, they would be at less risk of developing aggressive behaviors.
Unfortunately, this problem will probably exist forever, since technology is always evolving and will always be a part of our lives. With video games becoming more and more realistic, children are being wrapped up in a false reality where they learn that violence is acceptable. And since children especially have a hard time separating this false reality from the real world, their in-game behavior can translate into real life. The only people with enough authority in children's lives to slow the growth of this issue are the parents. If parents were simply to stop providing their children with video games containing mature content and censor what their children are allowed to play, this issue would become less of an epidemic and the overall behavior of children across the world would improve.

Exploratory Essay

Joseph Lewers

ERH-102-06

Mrs. Smith

3-11-16

Artificial Intelligence

Advancements in the field of technology over the past couple of decades have redefined how we see our own future. In the past decades alone, we have developed computers capable of connecting people across the world, assisting in medical operations, and even carrying out military operations. Given these advancements, the common belief is that in the future every aspect of our lives will have some sort of technological influence. To an extent, our generation has already come to that point. Almost everything we do involves a computer, whether it is driving to work, catching up with our friends on social media, or even something as simple as making a cup of coffee, but our generation's technology all still has one thing in common: it is controlled by humans. As of right now, computers are unable to think and make decisions for themselves. However, that may not be the case in the generations to come. Being developed right now is a technology called artificial intelligence, through which computers will soon be able to think for themselves and make decisions based on the information programmed into them. On a personal note, this up-and-coming technology is very relevant to me since I am pursuing two fields that are potential consumers of it. At the moment, I am studying mechanical engineering, which means that if this technology becomes a reality, it will be part of my everyday life as an engineer. I also plan on commissioning in the Navy, which is known for having the newest technology and could possibly try to weaponize it. Now it may seem like humans are on the path to a world similar to the one seen in the popular Terminator movies, but I would argue that those fears are irrational. There are practical uses for artificial intelligence in the medical field, in industry, and in education. But is this technology realistic and reliable enough to perform these jobs?
And, as silly as it sounds, should society trust the technology not to become self-aware and disobey what it is programmed to do?

Before even considering trusting technology with a human life, I had to learn how this technology works. I read a number of articles to try to piece together the inner workings of artificial intelligence and determine its trustworthiness. In the article "Simulation-Based Evaluation of Learning Sequences for Instructional Technologies," John E. McEneaney explains that all computed decision making starts with the person who writes the code. The programmer must define a hierarchy of values which dictates the morals of the system. For example, in a system built for medical uses, the programmer must tell the computer how to weigh survivability, success rate, and other factors before it makes a decision. Next I read the article "Probabilistic Machine Learning and Artificial Intelligence" by Zoubin Ghahramani, which describes another key component of artificial intelligence: computers that learn from experience. This is made possible by methods called probabilistic modeling and Bayesian optimization. In layman's terms, probabilistic modeling is when a computer goes through a list of possible outcomes to a situation and analyzes data it has recorded in the past. Bayesian optimization then weighs the potential risks and benefits of each option according to the hierarchy of morals programmed into the system and chooses the best option. With these pieces put together, we could very well see the development of highly intelligent computers that continuously analyze situations and make decisions based on the moral guidelines set in place by the programmer.

To me, just hearing the concrete facts about what is being done to develop the technology answers my question of whether it is realistic. Since there is clear progress being made on the software that artificial intelligence would require, we could see this technology become fully functional in the near future. These articles also reinforced my belief that there is nothing to be wary of if computers do become artificially intelligent. One fear that lurks in the back of society's mind is that if computers can think for themselves, they might learn not to obey the commands they are given. These articles put those fears to rest by clearly stating that a computer must go through a whole operation before making its decision, and that the operation can only be set and altered by the programmer. Those were my two takeaways from the first two articles, but they did not really touch on why the technology would be practical, or on its reliability, so I had to keep reading.

Then I came across the book The AI Business: Commercial Uses of Artificial Intelligence by P. H. Winston and K. A. Prendergast. The book goes into depth about the commercial uses for artificial intelligence in medicine, industry, and other electronics such as robots. The authors claim that artificial intelligence will be paired with advanced robotic technology to help in the medical and industrial fields (177). They predict robots could be used to assist with or even perform medical procedures more precisely than humans, which could significantly increase the survival rate of high-stakes procedures (179). They also see the possibility of robots being used in industrial settings: robots and other intelligent systems could run 24/7 monitoring machinery, and if something went wrong, the system could diagnose and repair the problem (216). These are very relevant applications of artificial intelligence, and when the technology does become a reality, I believe these two fields will be among the first it is applied to.

Personally, I was already aware of the practical applications of artificial intelligence from prior interest in the subject, but I do believe this book could open some people's eyes to what the technology could be used for. If robots could assist with or perform high-stakes medical procedures flawlessly, I think that would persuade patients of the technology's practicality by eliminating human error, which could save lives. Next, if robots or artificially intelligent monitoring systems were introduced to industry, factories could run 24/7 without having to hire workers to monitor production. That would cut the costs of running factories and essentially double production, which without a doubt would persuade large companies of AI's potential. If the production rates of factories doubled, it would benefit not only those companies but the entire country's GDP, which would help our economy immensely and would most likely silence all doubts about artificial intelligence's practical usage.

Before doing much research, I held the opinion that artificially intelligent technology is the inevitable future of society, so I might as well accept it. After researching the subject, I have become more enthusiastic about this upcoming technology because of all the ways it could benefit society. I had a couple of ideas of how AI could be used in electronics or even the classroom, but I found a book that described possible uses in the medical field and industry, which I think would be more useful to society as a whole. So, to answer my earlier question about whether the technology is realistic and reliable enough to perform complex tasks: yes, the technology is in its infant stage right now, but it will eventually be perfected to the point where it is more reliable than humans. To answer my other question about whether society should trust computers to think for themselves, I would respond that computers can only do what they are programmed to do and cannot alter their own code. After suppressing the doubts I had in the back of my head about how realistic and reliable artificial intelligence is, and after researching the systematic process behind computers' decision making to determine whether I would trust the technology, I have drawn my conclusion. This technology is in fact inevitably going to be a part of my life in the near future, and I think it will benefit me in some way. The benefits it would bring are the reasons I am enthusiastic for its arrival, and I think everyone should be just as excited.

Works Cited

Ghahramani, Zoubin. "Probabilistic Machine Learning and Artificial Intelligence." Nature, 27 May 2015. Web. 25 Feb. 2016.

McEneaney, John E. "Simulation-Based Evaluation of Learning Sequences for Instructional Technologies." SpringerLink. Springer Netherlands, 20 Jan. 2016. Web. 25 Feb. 2016.

Winston, P. H., and K. A. Prendergast. The AI Business: Commercial Uses of Artificial Intelligence. Cambridge: MIT Press, 1986. Print.

Reflective Essay

Throughout my academic career, writing has been my weakest subject for a variety of reasons. Even after taking ERH-101 here at VMI, I do not feel that I have developed any new skills regarding what to address in my writing, which, along with other things, worries me coming into ERH-102. My concerns about entering the next level of writing course stem from the fact that I am no stronger a writer than I was in high school. This is due to my lack of interest in the subject and to my ERH-101 teacher being impersonal and not very helpful in teaching us writing strategies. Since I learned little to nothing about what to address in my writing last semester, I am a little afraid of what is to come in ERH-102.

As mentioned, I did not really learn anything new about addressing specific information in my essays throughout English Writing and Rhetoric 101. The only thing I specifically addressed in level-one English papers was the different points made by authors of articles we read. Last semester we wrote one paper where we had to address the opinions of authors and use their opinions to support our own theses. Other than that, I suppose I attempted to address points and topics that I knew my teacher would be interested in. I would briefly mention a topic that I knew would catch his attention, based on what he told us about himself at the beginning of the semester. These were simple attempts at making my essays a little more enjoyable for him to read and, hopefully, at having a positive impact on my grade. However, I learned both of these strategies in high school, leaving me with the reality that I did not learn much in ERH-101. This is my greatest concern now that I am moving into a higher-level English course where the expectations are higher yet my skills remain unchanged since high school.

The demand for well-developed essays with little to no room for error scares me coming into ERH-102 because I do not feel that I have developed college-level writing skills yet. My English class last semester did not teach me any new strategies as a writer, and my teacher gave minimal advice on how I could improve. Therefore, I still have a hard time with things such as the writing process and interpreting feedback. All through high school, my teachers tried to teach me the best way to develop a paper. They taught the process of laying out ideas, creating an outline, drafting, and revising. In the past, I procrastinated or simply did not have enough time to go through the whole process of gathering my thoughts or creating an outline, so I would skip straight to the drafting phase. That was a bad habit I was hoping to break last semester, but I was not able to, since the Ratline and work from other classes took up a lot of my time. Besides my approach to writing papers, another thing that has me worried moving into ERH-102 is that I have a hard time revising my papers. Since I usually had to meet some sort of deadline, I would try to make my first draft as good as possible to avoid as much of the revising phase as I could. This gave rise to two problems last semester in particular: first, how inefficient that strategy was, and second, that I would not get very much constructive feedback from peer reviews. I would submit a rough draft to my peers to be revised, but I would rarely get any feedback on how I could improve my paper as a whole. They would return my paper with a bunch of highlighted sentences containing grammatical mistakes or missing punctuation, which I usually caught myself anyway. My point is that peer reviews were an insufficient strategy for revising my papers, and I wish I could have gotten some sort of feedback from my teacher before handing in my final draft.
The last thing that has me worried about ERH-102 is the workload. This semester I am taking 18.5 credit hours, and from what I have seen so far, four of those classes are going to be very time-consuming outside of the classroom. This has me worried about having enough time to produce quality work for this class and at least earn myself a B.

To conclude, my experience so far in college writing courses has done little to teach me ways to become a better writer, which, along with a couple of other things, has me worried about moving into this higher-level course. My first exposure to a college writing course consisted of coming to class, doing some assignment related to our paper, and then leaving with little to no interaction with the professor. Some students liked this style of teaching, but I was not one of them. I prefer a class where the teacher interacts with the students and gives personal feedback on how they can improve their writing. The lack of development last semester, paired with increasing expectations, is the main thing that scares me about ERH-102. Even though I am faced with these challenges, I feel that with a little hard work and dedication I should be able to achieve my goal of receiving a B in this class.