Matthew Eargle, Gregg Hamilton, and Mary Morgan
Middle Georgia College
Technology is rapidly improving throughout every aspect of society. While some of these advancements are mundane, others walk a path of moral ambiguity. Genetic engineering raises questions about playing God and the nature of humanity. The automated cockpit asks people en masse to put their very lives in the hands of machines, while robotic surgeons operate micrometers between life and death. Cybernetic prostheses ask humans to become part robot. Drone aircraft give an offensive military force extensive, nearly unfair advantages while putting defenders at a major disadvantage. Robotics and artificial intelligence research calls into question the very definition of life. Driving many of these technological advancements is the philosophy of open source, wherein information itself, now the basis of our resources, is inherently free to anyone who can use it, contrary to the traditional Lockean view of private intellectual property.
Genetic Engineering: Safety and Security
“Humans have long since possessed the tools for crafting a better world. Where love, compassion, altruism and justice have failed, genetic manipulation will not succeed.”
–Gina Maranto, Quest for Perfection
“The rapid…development of molecular genetics in the period from 1953 to 1970 provided the basis for understanding aspects of genetics at the molecular level that had only been imagined by prewar [World War II] geneticists” (Contemporary Genetics, 2009). An understanding of how DNA replicates itself, how genes control cell function through proteins that serve both structural and catalytic roles, the nature of the genetic code itself, and the way in which genes are regulated all suggests that human beings will soon be able to engineer themselves or other organisms in almost any conceivable direction (Lazou, 2002).
“The application of the new genetics to practical concerns, both in agriculture and medicine, raised a number of social, political, and ethical issues, some of which overlapped with concerns from the classical era and some of which were quite new to the molecular era” (Contemporary Genetics, 2009). In agriculture, one of the first great controversies to emerge concerned the technology for transferring genes from one organism to another. The common method for doing this has been to use a bacterium or virus as a transmission vector to inject the new DNA strand into the subject’s cellular material. Characteristics such as resistance to various insect and mold infestations, specifically, can be genetically engineered by transferring DNA from a species that has one of these traits to another one of higher commercial value (Contemporary Genetics, 2009). The controversies arising from the appearance of this technology reached significant proportions in the early 1980’s in Massachusetts where much of the experimental work was being carried out by Harvard and MIT biologists. Fears that viruses could escape into the community through the massive use of the new technology sparked a series of public meetings and calls for a moratorium on all genetic engineering until safeguards could be assured. Eventually, guidelines were incorporated into all grants funded by the National Institutes of Health based on some of the early decisions among molecular biologists themselves.
Especially in the agricultural realm, the issue of “genetically modified organisms” (GMOs) became a matter of global concern in the 1980’s and 1990’s. Critics of these new biotechnologies have argued that GMOs can have altered characteristics able to adversely affect the physiology of the consumer and the surrounding environment (Lazou, 2002). One such case became apparent in 1999, when pollen from corn genetically modified to be insect resistant was found to kill monarch butterflies in the United States.
“Indeed, as mega-corporations such as Monsanto and others turned aggressively to exploiting the GMO market, many countries, especially those in the European Union and Africa, began to place restrictions on, or even ban, the sale or importation of GMOs within their borders. The issue was less the effect on a specific species such as the monarch butterfly than the fact that destruction of the monarch symbolized a major problem with GMOs: as a result of competitive pressure from rival companies they were often rushed onto the market without thorough testing” (Contemporary Genetics, 2009).
A deep-rooted distrust of large agricultural corporations, which are seen as more concerned with profit than sustainability, has fueled much of the negative response to GMOs worldwide, along with outcries from public health watchdog groups that want assurance of the long-term safety of consumers.
Equally as important has been the issue of using human subjects in genetic research. The problem of informed consent has become a central aspect of the ethics of all human subject research protocols since the 1970’s. All universities and hospitals engaged in any sort of human genetic research are required to have internal review boards responsible for overseeing projects in which human subjects are involved (Contemporary Genetics, 2009). With regard to genetic information about individuals, the issue of consent is meant not only to ensure that individual subjects fully understand the nature of the research that they are taking part in, but also to place tight restrictions on who has access to the information. Of particular concern in clinical studies is whether individual subjects could be identified from “examining published or unpublished reports, notebooks, or other documents” (Contemporary Genetics, 2009). Anonymity has become the top priority of all modern genetic research involving human subjects.
As testing for genes known to be related to specific human genetic diseases, such as sickle-cell anemia, Huntington’s disease, or cystic fibrosis, has been made available to clinicians, two questions have loomed large, especially in the United States: the accuracy of the individual tests and access to the results. Dystopian fears that genetic screening programs might cost people jobs or health care have become more plausible. Even more concerning is the potential for private insurance companies to obtain–or even require–genetic testing of adults as the basis for medical coverage, or, in what seems eugenic in nature, to drop coverage if a fetus with a known genetic defect is carried to term. Medical insurance companies have already attempted to classify genetic diseases as “prior conditions” that are thus exempt from coverage (Contemporary Genetics, 2009). Most of these plans have not been carried through, but the threat does exist, and it raises a host of legal, social, and psychological concerns not only for the individual but for the welfare of society in general.
The Glass Cockpit: Making a Push-Button Pilot
“Now I know what a dog feels like watching TV.”
–Anon. DC-9 Captain regarding the A-320 Glass Cockpit
A glass cockpit is an aircraft cockpit that features electronic instrument displays. A relatively recent development, glass cockpits are highly sought-after upgrades from traditional cockpits. Where a traditional cockpit relies on numerous mechanical gauges to display information, a glass cockpit utilizes a few computer-controlled displays that can be adjusted to show flight information as needed. This dramatically simplifies the cockpit and allows pilots to focus on only the most important information, reducing their workload and leaving them more time for controlling the aircraft. The glass cockpit promises a better way to fly and gives the aircraft a modern, enhanced look. The downside is that a glass-cockpit aircraft costs more than a conventional one and costs more to repair when something goes wrong. Even so, some aircraft, such as the Diamond DA42 Twin Star, come only with a glass cockpit, most basic trainers are now offered in glass as well, and most buyers choose the glass option.
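The core idea, that one configurable display replaces many fixed gauges, can be sketched in a few lines of code. The sensor names, values, and page layouts below are purely hypothetical illustrations, not taken from any real avionics system:

```python
# Toy sketch of the glass cockpit concept: a single configurable display
# shows only the parameters the pilot needs at the moment, instead of
# dedicating one mechanical gauge to each. All names/values are made up.

SENSORS = {
    "airspeed_kt": 142.0,
    "altitude_ft": 6500.0,
    "heading_deg": 270.0,
    "oil_temp_c": 85.0,
    "fuel_gal": 32.5,
}

# Each "page" lists the parameters it displays, like the switchable
# screens in a real glass cockpit.
PAGES = {
    "primary_flight": ["airspeed_kt", "altitude_ft", "heading_deg"],
    "engine": ["oil_temp_c", "fuel_gal"],
}

def render(page_name):
    """Return only the readings configured for the selected page."""
    return {name: SENSORS[name] for name in PAGES[page_name]}

print(render("primary_flight"))
```

Switching from `render("primary_flight")` to `render("engine")` changes what the one screen shows, which is exactly the workload reduction described above: fewer instruments competing for the pilot's attention at any one time.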
The glass cockpit has become standard equipment in airliners, business jets, and military aircraft, and was even fitted into NASA’s Space Shuttle. In the 1970’s, an airliner would have over one hundred gauges and controls. NASA was the first organization to research and develop the glass cockpit, working to produce electronic flight displays that overcame early problems with glare and limited viewing angles.
The Unmanned Aerial Vehicle: Terminators in the Sky
“Listen and understand…It can’t be bargained with…It doesn’t pity, or remorse, or fear. And it will absolutely not stop–ever–until you are dead.”
–Kyle Reese (Michael Biehn), The Terminator
A new threshold in the history of air power is opening on a scene changed by the impact of a new weapon-delivery mode. The unmanned aerial vehicle, or UAV, has arrived as a viable element in aerospace power. Its Air Force mission areas include reconnaissance, air-to-ground strikes, and electronic warfare.
Since the mid-1970’s, the aerospace industry has been developing and rethinking the UAV, driven by two facts: the rising cost of new aircraft and the increased effectiveness of defensive systems. Since World War II, tactical aircraft have come to cost millions of dollars each, with some new-generation vehicles costing more than fifteen million apiece. These costs have driven modern aircraft to the point of being limited, high-value assets. Improved defense systems have forced the use of more refined and costlier aircraft while inflicting higher wear and tear, and have also necessitated a three- to fourfold increase in support aircraft for electronic countermeasures and Combat Air Patrol.
Since then, UAVs have been developed for other applications, but operationally they have been used primarily in the reconnaissance role or as target drones. Another mission application was tactical electronic warfare support. The activation of the 11th Tactical Drone Squadron on 1 July 1971 marked the beginning of employing unmanned vehicles in tactical operations.
The history of the drone remained under wraps until 1938, when the Army Air Corps contracted a radio-controlled-aircraft company, which would become the Ventura Division of Northrop Corporation, to build radio-controlled target drones. This started the first production line of radio-controlled drones in the world. In World War II, the U.S. actually converted battle-ready B-17 and B-24 bombers into drone aircraft meant to fly into heavily guarded Germany and the coast of France, but the plan was abandoned because the heavy cost of making the aircraft airworthy took its toll on the U.S.
“In the years immediately following World War II, much of the R&D activity was focused on the guided missile program. The UAV found its role limited to target applications, which became the technological base for our current unmanned vehicles. A number of manned aircraft were modified for drone applications, again, primarily, in the target application” (Assault Drones, n.d.).
Tensions during the early sixties provided the catalyst to employ the UAV in other than target applications. In 1962, two research and development photo-reconnaissance UAVs were created out of modified Firebee target drones. From this humble beginning an operational reconnaissance capability evolved, which was used in Southeast Asia.
The current inventory of USAF drone/UAV systems is directly related to the manner in which the programs developed historically. Usually, an existing target drone or a copy was selected for modification to meet an urgent operational reconnaissance need rather than expend the critical time required to design and develop the best possible radio controlled vehicle.
Today, drones are being used for reconnaissance over Iran and Iraq in support of our soldiers. Many people in the Air Force who want to be jet pilots are instead being assigned to the drone program to help save soldiers’ lives on the ground. The drones fly out of bases in other countries, while some of their pilots and radio operators sit in the U.S. or in other parts of the world. The ground-station cockpit is as realistic as a jet fighter’s and presents views as if the operator were aboard the drone flying it.
Artificial Intelligence: Hello, Computer
“We are all, by any practical definition of the words, foolproof and incapable of error.”
–HAL 9000 (Douglas Rain), 2001: A Space Odyssey
AI, or artificial intelligence, is the division of computer science that deals with writing computer programs that can solve problems resourcefully. AI is commonly used in medical systems and can also be found in industrial robots. “Today developers can build systems that meet the advanced information processing needs of government and industry by choosing from a broad palette of mature technologies. Sophisticated methods for reasoning about uncertainty and for coping with incomplete knowledge have led to more robust diagnostic and planning systems. Hybrid technologies that combine symbolic representations of knowledge with more quantitative representations inspired by biological information processing systems have resulted in more flexible, human-like behavior” (Waltz, 1996).
“AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society” (Waltz, 1996).
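One classic way such programs solve problems is rule-based reasoning, the technique behind the early diagnostic expert systems of the kind mentioned above. The following is a minimal, self-contained sketch; the rules and facts are invented for illustration and are not real medical knowledge:

```python
# Toy forward-chaining inference engine: a minimal illustration of the
# rule-based reasoning used in classic AI diagnostic systems.
# The symptoms and conclusions below are hypothetical examples.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule only if all its conditions hold and its
            # conclusion is not already known.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}, RULES))
```

Note how the second rule fires only after the first has added its conclusion: chaining simple rules into multi-step conclusions is what gave these diagnostic systems their apparent reasoning ability.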
Robotics: Humanity’s Replacement?
“A robot may not injure humanity, or, through inaction, allow humanity to come to harm.”
–Isaac Asimov, the “Zeroth Law” of Robotics
“Robotics is the use of technology to design and manufacture (intelligent) machines, built for specific purposes, programmed to perform specific tasks” (Ethical Issues, 2008). The technology of robotics is growing rapidly. “Robots are very visible machines, ranging from small, miniature machines, to large crane size constructions with intelligence varying from simple programming to perform mechanical tasks, such as painting a car or lifting cargo, to highly complex reasoning algorithms mimicking human thought” (Ethical Issues, 2008). Many ethical questions have been raised from the development and increasing use of robotics. “The question whether it is ethically and morally responsible to manufacture robot workers – and androids – is one of the most frequently asked questions when it comes to robots and artificial intelligence” (Ethical Issues, 2008). To this question, there is not an easy answer.
“The argument that robot workers take jobs from human workers is true. It is also true that these jobs are generally repetitive jobs, monotonous and often hazardous to human workers. Is it wrong then to replace humans with robots in these cases?” (Ethical Issues, 2008). As long as enough jobs remain for humans, and robots are not causing thousands of people to lose their jobs completely, it is not wrong. “A more detailed answer lies in the progress and development of countries as well as advances in science and technology” (Ethical Issues, 2008). Many of the wealthier countries have allowed the science and technology of robotics to advance, and their people are advancing in education as well. The need for human workers in factories is decreasing, and even less-educated workers are becoming wealthier and less willing to work in factories.
Manufacturers now have a few options to consider when running their factories. One option is to use robots in the factories instead of humans; this reduces cost and increases efficiency. Another approach, which keeps local workers employed and the public happy, is to use migrant workers in semi-automated factories, though this causes social and financial difficulties of its own. The most common approach is to combine the two options above and “move the factory to a low income country AND employ robot workers. In this scenario, yes, human workers lose out all around…So the real question is how to obtain a balance between using the development of technology without causing undue hardship?” (Ethical Issues, 2008).
Robots are comparable to computers because they can both be valuable tools in our everyday and working lives. Robots are taking over more of the cyclic, hazardous and time consuming tasks so that we can spend our time more valuably. For example, “Provided the costs are low, a farmer can employ agricultural robots that till and seed the land, do the weeding and harvest the crops” (Ethical Issues, 2008). If robots could run on solar energy, it would be even better. Also, in the food industry, robots are cleaner and more humane butchers than humans. When it comes to pollution, robots can clean up substantial amounts of waste on the land and in the water. They can even reforest the land. In the home, robots have already begun to help with the house cleaning and chores. The iRobot Roomba is a vacuum cleaning robot that vacuums a household with little input from humans. If more robots are created to do household cleaning and chores, humans will have more time for leisure activities (Ethical Issues, 2008).
In hospitals, robots can assist in laboratories and operating rooms. For example, robots can distribute medicines, do cleaning work, and even act as receptionists. At Aizu Central Hospital in Aizu-Wakamatsu, Japan, an android receptionist and two porters work together with humans. The receptionist robot welcomes patients and answers questions that they might have, and the two porters can carry luggage and take patients to their rooms or other destinations in the hospital (The Future, 2006). Robots are also able to perform basic surgical procedures. “The possibility of robots working at a micro precision scale may even make them more suitable for these procedures” (Ethical Issues, 2008). According to a study by the University of Maryland, because robotic surgeons make “a smaller incision, patients recovered faster. They were out of the hospital faster, had fewer complications, and the blood vessels were more likely to stay open” (Blankenhorn, 2008). In fact, robots can be manufactured to do all the things that we, as humans, do not want to do for any reason. Is it ethical to allow robots to do everything humans do not want to do? Where would that leave the humans? Without jobs, and with only leisure activities to fill their time, how are humans going to make money to pay for those activities? And if robots are used as workers, are they also going to be paid for their work? (Ethical Issues, 2008)
A new and astonishing use of robots is also being researched. David Levy made a statement saying, “There’s a trend of robots becoming more human-like in appearance and coming more in contact with humans” (Choi, 2007). At first, robots were used impersonally. They were used in factories where they helped build automobiles, in offices to deliver mail, or to show visitors around museums. Now, robots are being used more affectionately. For example, toys like Sony’s Aibo robot dog, or Tyco’s Tickle Me Elmo, or digital pets like Bandai’s Tamagotchi are loved and enjoyed by children. Because of the affection created by these robots, Levy created a theory. “In his thesis, ‘Intimate Relationships with Artificial Partners,’ Levy conjectures that robots will become so human-like in appearance, function and personality that many people will fall in love with them, have sex with them and even marry them. ‘It may sound a little weird, but it isn’t,’ Levy said. ‘Love and sex with robots are inevitable’” (Choi, 2007). Robots are truly becoming more like humans. A robot named Dexter has even taught itself to walk.
“Dexter took its first tentative steps only a few days after it first discovered how to stand upright. Dexter’s designers say their robot differs from commercially available predecessors because it can learn from its mistakes” (Walking Robot, 2007). Is it ethical for humans to have relationships with robots? Some people are likely to choose robots over humans as partners. A robot partner could be programmed to be the perfect mate for a human, so that disagreements between the two would be minimal or nonexistent. However, a relationship between a human and a robot is prone to be treated with some hostility, much as same-sex relationships were treated at first.
Cybernetics: The Next Evolution of Mankind
“I am C-3PO, Human-Cyborg Relations”
–C-3PO (Anthony Daniels), Star Wars
In the medical field, cybernetic prostheses ask humans to replace one or more parts of their body with robotics. “A highly dexterous, bio-inspired artificial hand and sensory system that could provide patients with active feeling, is being developed by a European project” (Cybernetic Hand, 2005). The Cyberhand project intends to go beyond what humans can imagine in prosthetics. The project plans to hardwire the hand into the nervous system, allowing “sensory feedback from the hand to reach the brain, and instructions to come from the brain to control the hand, at least in part” (Cybernetic Hand, 2005). Is allowing a robotic hand to be wired to the brain ethical? The idea seems to be a fantastic medical breakthrough, but the humans who use the Cyberhand are going to be part robot. Is there a limit on how far humans should be able to go when replacing or enhancing a body part with robotics?
It will soon be possible to enhance the human brain with electronic “plug-ins” or even by genetic enhancement. “What will this mean for the future of humanity? This was the theme of a recent Neuroscience in Context meeting in Berlin, Germany, where anthropologists, technologists, neurologists, archaeologists and philosophers met to consider the implications of this next stage of human brain development” (Boosting Brainpower, 2009). Could brain enhancement further widen the gaps in social status within the human race, or even make people superhuman in their intellect? The ethical issues begin with human dignity. Dietrich Birnbacher, a philosopher at the University of Düsseldorf in Germany, argues that one potential problem arises from altering what we consider to be “normal”: the dangers are similar to the social pressure to conform to idealised forms of beauty, physique, or sporting ability that we see today. “People without enhancement could come to see themselves as failures, have lower self-esteem or even be discriminated against by those whose brains have been enhanced,” Birnbacher says (Boosting Brainpower, 2009).
The American Heritage Dictionary defines a “cyborg,” short for “cybernetic organism,” as “a human who has certain physiological processes aided or controlled by mechanical or electronic devices” (Ask a Scientist, 2008). According to this definition, thousands of cyborgs live among us right now. “Anyone who has a pacemaker to promote a normal heartbeat, a prosthetic leg with electronic motors, or wears a hearing aid could be considered a cyborg. While many of these prostheses are designed to replace lost abilities, others are designed to enhance ones that already work” (Ask a Scientist, 2008). A company called Cyberkinetics recently received approval from the government to experiment with neural prostheses that would permit humans with severe paralysis to send commands to a computer using only their thoughts. “If this technology works, then people who aren’t paralyzed might also be able to use it to supplement their normal abilities. The advanced cyborgs of the future—some of whom may be elected to government—may simply be regular humans with biological implants that give them super-human abilities” (Ask a Scientist, 2008).
Telemedicine and robotics serve an ethical principle in that they enable practitioners of many medical disciplines to make their services available in areas they cannot possibly reach in person. Robotics can thus diminish the shortage of medical specialists in underserved regions and countries. However, there is a risk that these technologies may aggravate the relocation of medical specialists away from low-resource areas by giving them the means to serve, electronically and robotically, the countries or areas they leave. “In its 1999 statement on telemedicine, the World Medical Association emphasizes that regardless of the telemedicine system under which the physician is operating, the principles of medical ethics globally binding upon the medical profession must never be compromised” (Dickens & Cook, 2006). These include such matters as “ensuring confidentiality, reliability of equipment, the offering of opinions only when possessing necessary information, and contemporaneous record-keeping” (Dickens & Cook, 2006). Can robots treat patients in an ethical manner? Will robots ever have opinions? These questions are hard to answer, because robots can be made to treat patients but will lack the human qualities that medical specialists need to have.
Open Source: Advancement Through Collaboration
“In a world without walls, who needs Gates and Windows?”
–Scott McNealy on Microsoft
A long-standing, but quickly-growing debate in the world of business information technology is that of using open source versus closed source software. Open source generally denotes software that is freely available to acquire, distribute, modify, and adapt depending on the end-users’ needs. This concept, however, is not restricted solely to software. A broader definition would be one that includes any sort of technology in which the end-user has free (as in “freedom”) access to the products’ source material (Wikipedia, 2009).
The most fundamental ethical issue behind open source is a question of ownership. When one creates a new piece of software, that developer, traditionally, has had a Lockean private property sense of ownership over it wherein only the developer has access to the information and controls all aspects of where and how the product is distributed. This Draconian view stifles innovation and discourages criticism and peer review which is so key to advancing technology. With the open source model, technology is subject to immediate review and feedback to create better products faster and more in line with the end-users’ needs.
In the economy of the 21st century, information is pivotal to all advancement. Unfortunately, information itself is impossible to put a fair price on and impossible to keep value in (Velazquez, 2006). Once an individual has obtained that information, it is useless–as well as morally bankrupt–to keep it to oneself, excepting very rare instances. Companies attempt to control this information by making their products closed–refusing to provide details regarding the nature of their software–and protecting those secrets at any cost. Some even go so far as to take legal action for another entity reverse-engineering a product in order to comply with certain interface standards (Spinello, 1997). The open source model attempts to put a moral imperative to share information–in a utilitarian sense that all parties receive a net gain through technological advancement–into the minds of software developers.
Driving the argument toward open-source development is the idea of interface standards, whereby multiple developers can produce for one infrastructure and consumers benefit the most through pure competition versus a top-down approach to development where one entity controls who will develop for a particular platform. In the 1980’s, Apple had a technologically-superior product in their Macintosh computer, but sales slumped in the wake of the open standard IBM PC which allowed for multiple operating systems and a plethora of software titles to be developed faster and distributed more easily than the Macintosh. Microsoft continued to ensure this market dominance by allowing certain parts of their Windows OS source code to be available (in the form of “libraries” that supplemented the closed-source behemoth) for developers to freely adapt their products to the mushrooming interface standard. Apple, since their 2001 renaissance, has still embraced a closed-source model for all their developing, especially for their “killer app,” the iPhone, and it may again prove to be their Achilles heel as open source giant Google’s Android platform is gaining serious momentum, poised to topple the de-facto king of the mobile computing market (Roth, 2008).
In addition to providing market analysts with interesting article-fodder, the interface standard debate has started to appear in the academic arena as specialists debate the need for a universal standard by which newcomers to the IT field can learn during their postsecondary education, so they need not be re-taught a new, proprietary system every time they change employers. This kind of redundancy inhibits productivity and wastes valuable resources that could be better allocated to support and improvement roles (Chua, 2005). Also in the business IT field is the question of using unlicensed software. Oftentimes, because of monopolistic forces brought on by closed-source products (notably Microsoft Windows and Office), companies adopt these software suites as their standard but are forced to pay unfair prices in order to use them legally. This is where open source products can certainly make a difference. Custom derivations of the Linux operating system can be produced for little to no cost, for example, and distributed across a corporate infrastructure, while open source productivity suites (a la OpenOffice.org and Google Documents)–even though they may not be as “pretty” as their commercial, closed source counterparts–can provide all the functions a company needs to communicate internally and externally.
Quite possibly the most poignant argument between open and closed source information technology lies in the nature of security. As stated, information is power in our society and, while the majority of (non-personal) information should inherently be free, certain kinds of privileged, personal information should be kept confidential. This includes financial records, medical records, personal identification numbers, and so forth. To keep this information secure, certain protocols have to be adopted. In the world of closed source software, as evidenced by Microsoft Windows’s track record of gaping security flaws and proliferation of malware, this is not so easily done. The problem again lies in the lack of peer review and transparency with regard to methods of storing and transmitting information. The idea of “security through obscurity” neglects the possibility of a malicious individual breaking through the closed source barriers, and it only compounds the problem by preventing concerned developers from immediately identifying, diagnosing, and fixing flaws. In the open source world, however, malware is almost unheard of, as security flaws are recognised through peer review almost immediately and repaired just as fast. Thus, security is maintained through transparency and collaboration rather than through walls and litigation (Chua, 2005).
The open source philosophy transcends mere software development and can easily permeate every aspect of our society by encouraging a utilitarian idea of fairness that supports Adam Smith’s “invisible hand.” The spirit of cooperation and the spirit of competition work together to push technology forward–just like a spirit of openness and freedom allowed the Western world to triumph over the closed, walled-off Soviet empire during the Cold War. There is no technology in existence that did not have its origins in a previous idea or design and there should not be legal barriers to continuing this practice. Technological advancement is inherently organic; it evolves just as species in the wild do, and there should be no hindrance to this effect (Wikipedia, 2009).
Ask a Scientist. (2008, November 27). Retrieved November 14, 2009
Assault Drones. (2009, September 27). Retrieved November 16, 2009
Blankenhorn, Dana. (2008, April 28). Study Calls Robot the Better Surgeon. Retrieved November 14, 2009 from ZDNet Healthcare
Boosting Brainpower. (2009, May 14). Retrieved November 14, 2009
Choi, Charles Q. (2007, October 12). Sex and Marriage With Robots? It Could Happen. Retrieved November 14, 2009 from MSNBC
Chua, Sacha. (2005, January 4). Ethical Issues in Open Source [Web log message].
Contemporary Genetics – Dna, Genomics, And The New Ethical Dilemmas. (2009). Retrieved November 20, 2009
Cybernetic Hand Prosthesis is Under Development. (2005, December 12). Retrieved November 14, 2009
Dickens, Bernard, & Cook, Rebecca J. (2006). Legal and Ethical Issues in Telemedicine and Robotics. International Journal of Gynecology and Obstetrics, 94, 73-78.
Ethical Issues Concerning Robots and Android Humanoids. (2008, June 5). Retrieved November 14, 2009
Glass Cockpit. (n.d.). In Wikipedia. Retrieved November 22, 2009
Lazou, Chris. (2002, July 22). Ethical Issues – Genetic Engineering. Retrieved November 20, 2009, from Primeur Weekly website
Open Source. (n.d.). In Wikipedia. Retrieved November 22, 2009
Roth, Daniel. (2008, June 23). Google’s Open Source Android OS Will Free the Wireless Web. Retrieved November 22, 2009 from Wired
Spinello, Richard A. (1997). Software Compatibility and Reverse Engineering. In Richard A. Spinello, Case Studies in Information and Computer Ethics. (pp. 142-145). Upper Saddle River, NJ: Prentice Hall.
The Future is Here. (2006, November 5). Retrieved November 14, 2009
Unmanned Aerial Vehicle. (n.d.). In Wikipedia. Retrieved November 22, 2009
Velazquez, Manuel G. (2006). Business Ethics: Concepts and Cases, Sixth Edition. Upper Saddle River, NJ: Pearson Education.
Walking Robot Steps Up The Pace. (2007, March 2). Retrieved November 14, 2009
Waltz, David L. (1996). Artificial Intelligence: Realizing the Ultimate Promises of Computing. Retrieved November 22, 2009