Category Archives: Technology

Breaking the Helicopter Speed Barrier


New helicopter missions in both the commercial and military environments demand newer, faster helicopters. The chief obstacle is the absolute upper limit on rotary-wing airspeed. Several designers, including Bell, Boeing, Piasecki, and Sikorsky, have proposed various designs to break the helicopter speed barrier.

Since the first powered flight in 1903, inventors and innovators have refined and redesigned existing aircraft to make them fly higher, farther, and faster than before, and that race still continues in the world of fixed-wing aircraft. Since the development of the helicopter in 1913, however, the focus in rotary-wing development has been stability and stationary performance: hovering higher and longer, or carrying more weight. Relatively little attention was paid to forward speed until the last twenty years or so, when newer composite materials were developed that could withstand the strain of increased aerodynamic loading. AgustaWestland has been at the forefront of helicopter speed development since 1986, when a modified Westland Lynx set the world speed record at 249.1 mph (“Maximum Forward Speed”). As helicopter missions have become more varied, new designs have been needed to accommodate this growing segment of the industry (Hambling, 2008).

The main factor limiting a helicopter’s maximum forward speed is a phenomenon unique to rotary-wing aircraft known as dissymmetry of lift. Dissymmetry of lift describes the condition wherein the advancing and retreating blades, because of their motion relative to the wind, have unequal airspeeds and therefore generate unequal amounts of lift. To compensate, helicopter rotor systems are articulated in such a way as to allow each blade to “flap” up or down, changing its angle of attack and equalising the lift forces on both sides of the aircraft. Unfortunately, the faster a helicopter travels forward, the more lift must be compensated for (lift increases with the square of airspeed) and the more the retreating blade must increase its angle of attack. For a traditionally articulated rotor system, the absolute upper airspeed is approximately 250 mph. At that point, even the most advanced rotor systems and composite materials cannot prevent the retreating blade from exceeding its critical angle of attack, or the advancing blade from suffering shockwave-induced flow separation, and the rotor will, invariably, stall.
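The advancing/retreating imbalance is easy to illustrate with a back-of-the-envelope calculation, since lift scales with the square of local airspeed. A minimal sketch (the rotor figures below are illustrative round numbers, not taken from any particular aircraft):

```python
# Illustrative advancing/retreating blade imbalance.
# At a given radius, the advancing blade sees rotational speed PLUS
# forward speed; the retreating blade sees rotational speed MINUS it.
# Lift scales with dynamic pressure, i.e. with airspeed squared.

def blade_airspeeds(tip_speed_mph, forward_speed_mph):
    """Return (advancing, retreating) blade-tip airspeeds in mph."""
    return (tip_speed_mph + forward_speed_mph,
            tip_speed_mph - forward_speed_mph)

def lift_ratio(tip_speed_mph, forward_speed_mph):
    """Advancing-to-retreating lift ratio at the tip (lift ~ V^2)."""
    adv, ret = blade_airspeeds(tip_speed_mph, forward_speed_mph)
    return (adv / ret) ** 2

# Hypothetical rotor with a 450 mph tip speed in hover:
print(blade_airspeeds(450, 150))   # (600, 300): a 2x airspeed split
print(lift_ratio(450, 150))        # 4.0: four times the lift to flap away
print(lift_ratio(450, 250))        # 12.25: near the ~250 mph barrier
```

The numbers show why the barrier is so hard: the imbalance the flapping hinges must absorb grows much faster than forward speed itself.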

The first major foray into a high-speed aircraft that retained the ability to hover was the Bell-Boeing V-22 Osprey, a hybrid tilt-rotor that can take off, land, and hover like a helicopter while maintaining the high-speed, long-range cruise capability of a turboprop airplane. The project was prompted by the U.S. military’s need for a multi-role transport aircraft that could deploy troops and equipment to remote locations without a traditional runway, and do it faster than a traditional helicopter. The program has seen numerous setbacks since inception, including technical flaws (instability chief among them) and fatal crashes resulting from those flaws.

Seeing the development of the Osprey wrought with problems (and given the military’s need for a reliable escort for the “Thundering Chicken”), Piasecki Aircraft has begun work on a modification of a Sikorsky YSH-60F Seahawk. Piasecki has been a true innovator in the industry, having pioneered the tandem-rotor configuration behind the CH-47 Chinook and the CH-46 Sea Knight: rotor systems that create twice the lifting power while counteracting torque with two sets of counter-rotating blades. The modified YSH-60F, dubbed “Speedhawk,” replaces the traditional tail rotor with a movable “pusher propeller” that generates additional forward thrust without tilting the main rotor disc, while still providing anti-torque control at low speeds. The design has also been considered for adaptation to the civil market as a faster transport helicopter for offshore oil-rig crews, and it is the closest to actual implementation, as Piasecki has already developed a working prototype that meets all current military specifications.

Sikorsky, another traditional innovator in helicopter design, is also working to smash the 250 mph speed barrier imposed by traditional design limitations. The Sikorsky X2 utilises another unusual design element: a coaxial rotor system. This design takes the counter-rotating, anti-torque tandem rotor arrangement used in the Chinook and stacks the two rotors on top of each other, as in the Russian Kamov series of attack helicopters. The coaxial layout has proved quite successful in Russian military operations, producing helicopters that push the speed limits of the design without sacrificing stability as the Lynx did. Sikorsky plans to pair the coaxial system with a pusher propeller to achieve planned speeds in excess of 280 mph.

As rotary-wing design thinking moves further “out of the box,” innovative designs will begin to take hold, and the basic shape of the helicopter as we know it today will seem as distant as that of the autogyro. New commuter missions will demand that helicopters move people between city centers faster than terrestrial travel and directly to locations not necessarily adjacent to airports. Air ambulance services will demand faster transport for patients. The oil industry is seeking to adopt the Piasecki design, once approved for production, as a faster, more reliable “rig runner.” European transportation markets already make extensive use of helicopters for intercity travel, especially in mountainous or especially remote areas where traditional runways cannot effectively be built. The biggest market for these new “super-speed” helicopters will surely remain the military, where demand for effective, versatile attack aircraft has only increased as warfare has become more surgical and precise; but, like any technology that has emerged from government investment, the private sector will find extremely clever uses for the hardware, given the chance.


Dissymmetry of lift. (n.d.). In Wikipedia, the free encyclopedia.

Hambling, D. (2008, April 18). Speedhawk challenges Osprey. Wired.

Hodge, N. (2008, December 26). The quest for the 300-m.p.h. helicopter. Wired.

Kamov. (n.d.). In Wikipedia, the free encyclopedia.

Maximum forward speed. (n.d.).

Skillings, J. (2008, February 26). Sikorsky’s helicopter of the future. CNET News.

Sikorsky eyes helicopter speed record. (2009, April 20). SmartBrief.

Thompson, M. (2007, September 26). V-22 Osprey: A flying shame. Time.

V-22 Osprey. (n.d.). In Wikipedia, the free encyclopedia.

Cross-cut Paper Shredder Teardown and Repair

I managed to completely jam my paper shredder during a bout of document disposal last summer. An old credit report managed to wrap itself around the blade drum and would not back out, so I had to either buy a new shredder or tear this one apart and fix it!

Be sure to like, share, comment, and subscribe! Tally-ho, y’all!

How To Use A (Real) Router With AT&T U-Verse DSL

AT&T U-Verse DSL is many things: expensive, sub-par, flaky. But the worst part about it (besides the utter contempt they have for their customers and the monopolistic attitude of their executives) is the downright shitty quality of their highly-touted “residential gateways”. These are just glorified DSL modems with (barely) built-in WiFi and a barebones user interface. If you have only a computer and a phone (and maybe a tablet), it will suffice (as long as you stay in the same room), but if you’re going to be streaming to your TV, playing online games, setting up IoT devices, or using any other manner of modern technology, you absolutely will need a real router. Don’t get taken by the monopoly phone company: make sure they don’t charge you for the gateway, and buy yourself a proper router. It saves so much headache and hassle for just a little extra setup cost and effort!

Assuming you’ve already bought a proper 802.11ac router and at least plugged it in, connect to the device and make sure its connection type is set to “Dynamic IP (DHCP-Assigned)”. You may need to refer to your router’s instruction manual for how to do this. Go ahead and set up the residential gateway by plugging it into a power source and connecting the DSL (phone) line, as the AT&T installer did when he got his muddy footprints all over your carpet. Run an Ethernet cable from the “Broadband” port on your router (it may also be labeled “WAN” or simply “Internet”) to one of the available LAN ports on the back of the gateway. With another Ethernet cable, connect a computer to another open LAN port on the gateway.

This page looks like 1998 threw up all over it.

On the computer you just connected, open a browser and point it to the gateway’s configuration address (it’s printed on a label on the gateway itself). This will take you to the residential gateway’s settings interface. For the price that AT&T charges for these horrid little modems, you would think they might invest a little bit in UI design. Once your eyes stop bleeding, click the “Home Network” link at the top. On the right side of the page, you should see a box labeled “Status At A Glance”. Click the “DISABLE” button next to Wireless.

The system will ask you if you are sure you want to disable the built-in wireless router, to which you should respond “CONFIRM”.

Now, we need to edit the firewall settings for the new router. In the “Local Devices” box on the left, identify the wireless router from the list (there really should only be two options, and you can easily narrow it down if the gateway is only displaying IP addresses–just confirm the computer’s local address) and click “Edit firewall settings”.

On the next page, make sure the router is selected from the drop-down menu under the “Select a computer” heading. Then, click the radio button next to “Allow all applications (DMZplus mode)” and then the “DONE” button.

Verify all the settings are as you entered them (the device is the router, all applications are allowed, all protocols are allowed, all ports are open), then return to the home screen. Close your browser, unplug the Ethernet cable from the computer, then power-cycle the router. Once the router is back up and broadcasting, connect a device to the wireless network, open a website, and voilà! You now get a stronger WiFi signal, better connections, and more granular control over your network than you ever could with one of those terrible little AT&T residential gateways alone!
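If you want to sanity-check that DMZplus actually handed the public address through to your router, one quick test is whether the WAN IP shown on the router’s status page falls in a private (RFC 1918) or carrier-grade NAT range; if it does, you’re still double-NATed. A minimal sketch using Python’s standard library (the addresses below are made-up examples, not anything specific to AT&T):

```python
# Check whether a WAN address is private/shared, i.e. still behind NAT.
# Look up your router's WAN IP on its status page, then test it here.
import ipaddress

def still_double_natted(wan_ip: str) -> bool:
    """True if wan_ip is an RFC 1918 or carrier-grade NAT (RFC 6598) address."""
    addr = ipaddress.ip_address(wan_ip)
    cgnat = ipaddress.ip_network("100.64.0.0/10")  # shared address space
    return addr.is_private or addr in cgnat

print(still_double_natted("192.168.1.64"))  # True: DMZplus not passing through
print(still_double_natted("8.8.8.8"))       # False: a public IP made it through
```

A double-NATed router will still browse the web fine, which is why it’s worth checking explicitly: port forwarding and some games quietly break until the router actually holds the public address.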

MCI Primetime (circa 1989)

In the 1980s, long-distance telephone service suddenly found itself competing on the open market thanks to the antitrust breakup of the Bell System. MCI provided stiff competition for AT&T for many years until it was folded into Verizon, which still gives AT&T a decent level of competition.


Ethical Issues Surrounding Emerging Technologies

Matthew Eargle, Gregg Hamilton, and Mary Morgan

Middle Georgia College


Technology is rapidly improving throughout every aspect of society. While some of these advancements are mundane, some walk the path of moral ambiguity. Genetic engineering technologies raise questions about playing God and the nature of humanity. The automated cockpit asks people en masse to put their very lives in the hands of machines, while robotic surgeons operate micrometers between life and death. Cybernetic prostheses ask humans to become part robot. Drone aircraft provide extensive, nearly unfair advantages to an offensive military force while putting defenders at a major disadvantage. Robotics and artificial intelligence research calls into question the definition of life. Driving many of these advancements is the philosophy of open source, wherein information itself, now the basis of our resources, is inherently free to anyone who can use it, contrary to the traditional Lockean view of private intellectual property.

Genetic Engineering: Safety and Security

“Humans have long since possessed the tools for crafting a better world. Where love, compassion, altruism and justice have failed, genetic manipulation will not succeed.”

–Gina Maranto, Quest for Perfection

“The rapid…development of molecular genetics in the period from 1953 to 1970 provided the basis for understanding aspects of genetics at the molecular level that had only been imagined by prewar [World War II] geneticists” (Contemporary Genetics, 2009). Understanding how DNA replicates itself, how genes control cell function through proteins that serve both structural and catalytic roles, the nature of the genetic code itself, and the way in which genes are regulated suggests that human beings will soon be able to engineer themselves or other organisms in almost any conceivable direction (Lazou, 2002).

“The application of the new genetics to practical concerns, both in agriculture and medicine, raised a number of social, political, and ethical issues, some of which overlapped with concerns from the classical era and some of which were quite new to the molecular era” (Contemporary Genetics, 2009). In agriculture, one of the first great controversies to emerge concerned the technology for transferring genes from one organism to another. The common method has been to use a bacterium or virus as a transmission vector to inject the new DNA strand into the subject’s cellular material. Characteristics such as resistance to various insect and mold infestations can be genetically engineered by transferring DNA from a species that has one of these traits to another of higher commercial value (Contemporary Genetics, 2009). The controversies arising from this technology reached significant proportions in the early 1980s in Massachusetts, where much of the experimental work was being carried out by Harvard and MIT biologists. Fears that viruses could escape into the community through massive use of the new technology sparked a series of public meetings and calls for a moratorium on all genetic engineering until safeguards could be assured. Eventually, guidelines based on some of the early decisions among molecular biologists themselves were incorporated into all grants funded by the National Institutes of Health.

Especially in the agricultural realm, the issue of “genetically modified organisms” (GMOs) became a matter of global concern in the 1980s and 1990s. Critics of these new biotechnologies have argued that GMOs can have altered characteristics able to adversely affect the physiology of the consumer and the surrounding environment (Lazou, 2002). One such case became apparent in 1999, when corn genetically modified to be insect-resistant was reported to be killing off monarch butterflies.

“Indeed, as mega-corporations such as Monsanto and others turned aggressively to exploiting the GMO market, many countries, especially those in the European Union and Africa, began to place restrictions on, or even ban, the sale or importation of GMOs within their borders.  The issue was less the effect on a specific species such as the monarch butterfly than the fact that destruction of the monarch symbolized a major problem with GMOs: as a result of competitive pressure from rival companies they were often rushed onto the market without thorough testing” (Contemporary Genetics, 2009).

A deep-rooted distrust of large agricultural corporations, which are seen as more concerned with profit than sustainability, has fueled much of the negative response to GMOs worldwide, alongside outcries from public-health watchdog groups who want assurance of the consumer’s long-term safety.

Equally important has been the issue of using human subjects in genetic research. The problem of informed consent has been a central aspect of the ethics of all human-subject research protocols since the 1970s. All universities and hospitals engaged in any sort of human genetic research are required to have internal review boards responsible for overseeing projects in which human subjects are involved (Contemporary Genetics, 2009). With regard to genetic information about individuals, the issue of consent is meant not only to ensure that individual subjects fully understand the nature of the research they are taking part in, but also to place tight restrictions on who has access to the information. Of particular concern in clinical studies is whether individual subjects could be identified from “examining published or unpublished reports, notebooks, or other documents” (Contemporary Genetics, 2009). Anonymity has become the top priority of all modern genetic research involving human subjects.

As testing for genes known to be related to specific human genetic diseases, such as sickle-cell anemia, Huntington’s disease, or cystic fibrosis, has become available to clinicians, two questions have loomed large, especially in the United States: the accuracy of the individual tests and access to the results. Dystopian fears that genetic information might lead to job or health-care discrimination through genetic screening programs have become more plausible. Even more concerning is the potential for private insurance companies to obtain, or even require, genetic testing of adults as the basis for medical coverage, or, in what seems eugenic in nature, to drop coverage if a fetus with a known genetic defect is carried to term. Medical insurance companies have already attempted to classify genetic diseases as “prior conditions” exempt from coverage (Contemporary Genetics, 2009). Most of these plans have not been carried through, but the threat exists, and it raises a host of legal, social, and psychological concerns not only for the individual but for the welfare of society in general.

The Glass Cockpit: Making a Push-Button Pilot

“Now I know what a dog feels like watching TV.”

–Anon. DC-9 Captain regarding the A-320 Glass Cockpit

A glass cockpit is an aircraft cockpit that features electronic instrument displays. A relatively recent development, glass cockpits are highly sought-after upgrades from traditional cockpits. Where a traditional cockpit relies on numerous mechanical gauges to display information, a glass cockpit utilizes a few computer-controlled displays that can be adjusted to show flight information as needed. This greatly simplifies the cockpit and allows pilots to focus on only the most important information, reducing workload and leaving more attention for actually controlling the aircraft. The downside is that a glass-cockpit aircraft costs more than a conventional one and costs more to fix if something goes wrong. Even so, some aircraft, like the Diamond DA42 Twin Star, come only with a glass cockpit, most basic trainers are now offered in glass as well, and most buyers are choosing the glass option.

The glass cockpit has become standard equipment in airliners, business jets, and military aircraft, and was even fitted into NASA’s Space Shuttle. In the 1970s, an airliner would have over one hundred gauges and controls. NASA was the first group to research and develop the glass cockpit, working on early flat-panel displays that addressed problems of glare and viewing angle.

The Unmanned Aerial Vehicle: Terminators in the Sky

“Listen and understand…It can’t be bargained with…It doesn’t pity, or remorse, or fear.  And it will absolutely not stop–ever–until you are dead.”

–Kyle Reese (Michael Biehn), The Terminator

A new threshold in the history of air power is opening on a scene changed by the impact of a new weapon-delivery mode. The unmanned aerial vehicle, or UAV, is here as a viable element of aerospace power. Its uses span the Air Force mission areas of reconnaissance, air-to-ground strike, and electronic warfare.

Since the mid-1970s, the aerospace industry has been developing and rethinking the UAV, driven by two facts: the rising cost of new aircraft and the increased effectiveness of defensive systems. Since World War II, the cost of tactical aircraft has climbed into the millions of dollars each, with some new-generation vehicles costing more than fifteen million apiece. These costs have turned modern aircraft into limited, high-value property. Improved defensive systems have driven the use of more refined and costlier aircraft while inflicting higher wear and tear, and have also necessitated a three- to fourfold increase in support aircraft for electronic countermeasures and combat air patrol.

Since then, UAVs have been developed for other applications, but operationally they have been used primarily in the reconnaissance role or as target drones. Another mission application was tactical electronic warfare support. The activation of the 11th Tactical Drone Squadron on 1 July 1971 marked the beginning of employing unmanned vehicles in tactical operations.

The history of the drone was under wraps until 1938, when the Army Air Corps contracted with a radio-control company (which would later become the Ventura Division of Northrop Corporation) to build radio-controlled target drones, starting the first production line of radio-controlled drones in the world. In World War II, the U.S. actually converted battle-worn B-17s and B-24s into drone aircraft intended to fly into heavily guarded Germany and the coast of France, but the plan was abandoned because the heavy cost of making the aircraft airworthy took its toll.

“In the years immediately following World War II, much of the R&D activity was focused on the guided missile program. The UAV found its role limited to target applications, which became the technological base for our current unmanned vehicles. A number of manned aircraft were modified for drone applications, again, primarily, in the target application” (Assault Drones, n.d.).

Tensions during the early sixties provided the catalyst to employ the UAV in other than target applications. In 1962, two research-and-development photo-reconnaissance UAVs were created out of modified Firebee target drones. From this humble beginning an operational reconnaissance capability evolved, which was used in Southeast Asia.

The current inventory of USAF drone/UAV systems is directly related to the manner in which the programs developed historically. Usually, an existing target drone or a copy was selected for modification to meet an urgent operational reconnaissance need rather than expending the critical time required to design and develop the best possible radio-controlled vehicle.

Today, drones are used for reconnaissance in Iraq and elsewhere in support of our soldiers. Many people in the Air Force who want to pilot jet aircraft are actually being assigned to the drone program to help save soldiers’ lives on the ground. The aircraft fly out of bases in other countries while some of the pilots and radio operators sit in the U.S. or in other parts of the world. The ground station they sit in is laid out realistically, like a jet fighter’s cockpit, and displays views as if they were in the drone flying it.

Artificial Intelligence: Hello, Computer

“We are all, by any practical definition of the words, foolproof and incapable of error.”

–HAL 9000 (Douglas Rain), 2001: A Space Odyssey

AI, or artificial intelligence, is the division of computer science that deals with writing computer programs that can solve problems resourcefully. AI is generally used in medical systems and can also be found in industrial robots. “Today developers can build systems that meet the advanced information processing needs of government and industry by choosing from a broad palette of mature technologies. Sophisticated methods for reasoning about uncertainty and for coping with incomplete knowledge have led to more robust diagnostic and planning systems. Hybrid technologies that combine symbolic representations of knowledge with more quantitative representations inspired by biological information processing systems have resulted in more flexible, human-like behavior” (Waltz, 1996).

“AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society” (Waltz, 1996).

Robotics: Humanity’s Replacement?

“A robot may not injure humanity, or, through inaction, allow humanity to come to harm.”

–Isaac Asimov

 “Robotics is the use of technology to design and manufacture (intelligent) machines, built for specific purposes, programmed to perform specific tasks” (Ethical Issues, 2008).  The technology of robotics is growing rapidly.  “Robots are very visible machines, ranging from small, miniature machines, to large crane size constructions with intelligence varying from simple programming to perform mechanical tasks, such as painting a car or lifting cargo, to highly complex reasoning algorithms mimicking human thought” (Ethical Issues, 2008).  Many ethical questions have been raised from the development and increasing use of robotics.  “The question whether it is ethically and morally responsible to manufacture robot workers – and androids – is one of the most frequently asked questions when it comes to robots and artificial intelligence” (Ethical Issues, 2008).  To this question, there is not an easy answer.

“The argument that robot workers take jobs from human workers is true. It is also true that these jobs are generally repetitive jobs, monotonous and often hazardous to human workers. Is it wrong then to replace humans with robots in these cases?” (Ethical Issues, 2008). If there are still enough jobs left for humans, and as long as robots are not causing thousands of humans to lose their jobs outright, then it is not wrong. “A more detailed answer lies in the progress and development of countries as well as advances in science and technology” (Ethical Issues, 2008). Many of the wealthier countries have allowed the science and technology of robotics to advance, and their populations have become better educated. The need for human workers in factories is decreasing, and even less-educated workers are becoming wealthier and less willing to work in factories.

Now manufacturers have a few options to consider when running their factories. One option is to use robots in the factories instead of humans. This option is attractive because it reduces cost and is more efficient. However, in order to keep people happy, another approach could be to use migrant workers in semi-automated factories. This keeps people employed but causes social and financial difficulties. The most common approach is to combine the two options above and “move the factory to a low income country AND employ robot workers. In this scenario, yes, human workers lose out all around…So the real question is how to obtain a balance between using the development of technology without causing undue hardship?” (Ethical Issues, 2008).

Robots are comparable to computers because they can both be valuable tools in our everyday and working lives.  Robots are taking over more of the cyclic, hazardous and time consuming tasks so that we can spend our time more valuably.  For example, “Provided the costs are low, a farmer can employ agricultural robots that till and seed the land, do the weeding and harvest the crops” (Ethical Issues, 2008).  If robots could run on solar energy, it would be even better.  Also, in the food industry, robots are cleaner and more humane butchers than humans.  When it comes to pollution, robots can clean up substantial amounts of waste on the land and in the water.  They can even reforest the land.  In the home, robots have already begun to help with the house cleaning and chores.  The iRobot Roomba is a vacuum cleaning robot that vacuums a household with little input from humans.  If more robots are created to do household cleaning and chores, humans will have more time for leisure activities (Ethical Issues, 2008).

In hospitals, robots can assist in laboratories and operating rooms. For example, robots can distribute medicines, do cleaning work, and even act as receptionists. At Aizu Central Hospital in Aizu-Wakamatsu, Japan, an android receptionist and two porters work together with humans. The receptionist robot welcomes patients and answers questions that they might have, and the two porters can carry luggage and take patients to their rooms or other destinations in the hospital (The Future, 2006). Robots are also able to do basic surgical procedures. “The possibility of robots working at a micro precision scale may even make them more suitable for these procedures” (Ethical Issues, 2008). According to a study by the University of Maryland, since robotic surgeons make “a smaller incision, patients recovered faster. They were out of the hospital faster, had fewer complications, and the blood vessels were more likely to stay open” (Blankenhorn, 2008). In fact, robots can be manufactured to do all the things that we, as humans, do not want to do for any reason. Is it ethical to allow robots to do all of the things humans do not want to do? Where would that leave the humans? Without jobs, and with only leisure activities to do, how are humans going to make money to pay for their leisure activities? If robots are used as workers, are they also going to be paid for their work? (Ethical Issues, 2008)

A new and astonishing use of robots is also being researched.  David Levy made a statement saying, “There’s a trend of robots becoming more human-like in appearance and coming more in contact with humans” (Choi, 2007).  At first, robots were used impersonally.  They were used in factories where they helped build automobiles, in offices to deliver mail, or to show visitors around museums.  Now, robots are being used more affectionately.  For example, toys like Sony’s Aibo robot dog, or Tyco’s Tickle Me Elmo, or digital pets like Bandai’s Tamagotchi are loved and enjoyed by children.  Because of the affection created by these robots, Levy created a theory.  “In his thesis, ‘Intimate Relationships with Artificial Partners,’ Levy conjectures that robots will become so human-like in appearance, function and personality that many people will fall in love with them, have sex with them and even marry them.  ‘It may sound a little weird, but it isn’t,’ Levy said.  ‘Love and sex with robots are inevitable’” (Choi, 2007).  Robots are truly becoming more like humans.  A robot named Dexter has even taught itself to walk.

“Dexter took its first tentative steps only a few days after it first discovered how to stand upright. Dexter’s designers say their robot differs from commercially available predecessors because it can learn from its mistakes” (Walking Robot, 2007). Is it ethical for humans to have relationships with robots? People are likely to choose robots over humans to have relationships with. A robot partner could be programmed to be the perfect mate for a human, so that disagreements between the two would be minimal or nonexistent. However, a relationship between a human and a robot is prone to be treated with some hostility, much as same-sex relationships were treated at first.

Cybernetics: The Next Evolution of Mankind

“I am C-3PO, Human-Cyborg Relations”

–C-3PO (Anthony Daniels), Star Wars

In the medical field, cybernetic prosthetics allow humans to replace one or more parts of their bodies with robotics.  “A highly dexterous, bio-inspired artificial hand and sensory system that could provide patients with active feeling, is being developed by a European project” (Cybernetic Hand, 2005).  The Cyberhand project intends to go beyond what humans can imagine in prosthetics.  The project plans to hardwire this hand into the nervous system.  This will allow “sensory feedback from the hand to reach the brain, and instructions to come from the brain to control the hand, at least in part” (Cybernetic Hand, 2005).  Is allowing a robotic hand to be wired to the brain ethical? The idea seems to be a fantastic medical breakthrough, but the humans who use the Cyberhand are going to be part robot.  Is there a limit on how far humans should be able to go when replacing or enhancing a body part with robotics?

It will soon be possible to enhance the human brain with electronic “plug-ins” or even by genetic enhancement.  “What will this mean for the future of humanity? This was the theme of a recent Neuroscience in Context meeting in Berlin, Germany, where anthropologists, technologists, neurologists, archaeologists and philosophers met to consider the implications of this next stage of human brain development” (Boosting Brainpower, 2009).  Could brain enhancement further widen the gap between social statuses, or even make people superhuman in their intellect? Referring to comments made by Dietrich Birnbacher, a philosopher at the University of Düsseldorf in Germany, the article notes that one potential problem arises from altering what we consider to be “normal”: “the dangers are similar to the social pressure to conform to idealised forms of beauty, physique or sporting ability that we see today.  People without enhancement could come to see themselves as failures, have lower self-esteem or even be discriminated against by those whose brains have been enhanced, Birnbacher says” (Boosting Brainpower, 2009).

The American Heritage Dictionary defines a “cyborg,” short for “cybernetic organism,” as “a human who has certain physiological processes aided or controlled by mechanical or electronic devices” (Ask a Scientist, 2008).  According to this definition, thousands of cyborgs live among us right now.  “Anyone who has a pacemaker to promote a normal heartbeat, a prosthetic leg with electronic motors, or wears a hearing aid could be considered a cyborg.  While many of these prostheses are designed to replace lost abilities, others are designed to enhance ones that already work” (Ask a Scientist, 2008).  A company called Cyberkinetics recently received approval from the government to experiment with neural prostheses, which would permit humans with severe paralysis to send commands to a computer using only their thoughts.  “If this technology works, then people who aren’t paralyzed might also be able to use it to supplement their normal abilities.  The advanced cyborgs of the future—some of whom may be elected to government—may simply be regular humans with biological implants that give them super-human abilities” (Ask a Scientist, 2008).

Telemedicine and robotics serve an ethical principle in that they allow practitioners of many medical disciplines to make their services available in areas they cannot possibly reach in person.  Robotics can thus diminish the shortage of medical specialists in underserved regions and countries.  However, there is a risk that these technologies may aggravate the relocation of medical specialists away from low-resource areas by giving them the means to serve, electronically and robotically, the countries or areas they leave.  “In its 1999 statement on telemedicine, the World Medical Association emphasizes that regardless of the telemedicine system under which the physician is operating, the principles of medical ethics globally binding upon the medical profession must never be compromised” (Dickens & Cook, 2006).  These include such matters as “ensuring confidentiality, reliability of equipment, the offering of opinions only when possessing necessary information, and contemporaneous record-keeping” (Dickens & Cook, 2006).  Can robots treat patients in an ethical manner? Will robots ever have opinions? These questions are hard to answer, because robots can be made to treat patients but will lack the human qualities that medical specialists need to have.

Open Source: Advancement Through Collaboration

“In a world without walls, who needs Gates and Windows?”

–Scott McNealy on Microsoft

A long-standing but quickly-growing debate in the world of business information technology is that of using open source versus closed source software.  Open source generally denotes software that is freely available to acquire, distribute, modify, and adapt depending on the end-users’ needs.  This concept, however, is not restricted solely to software.  A broader definition would be one that includes any sort of technology in which the end-user has free (as in “freedom”) access to the product’s source material (Wikipedia, 2009).

The most fundamental ethical issue behind open source is a question of ownership.  When one creates a new piece of software, that developer has traditionally had a Lockean, private-property sense of ownership over it, wherein only the developer has access to the information and controls all aspects of where and how the product is distributed.  This Draconian view stifles innovation and discourages the criticism and peer review that are so key to advancing technology.  With the open source model, technology is subject to immediate review and feedback, creating better products faster and more in line with end-users’ needs.

In the economy of the 21st century, information is pivotal to all advancement.  Unfortunately, information itself is impossible to put a fair price on and impossible to keep value in (Velazquez, 2006).  Once an individual has obtained information, it is useless, as well as morally bankrupt, to keep it to oneself, excepting very rare instances.  Companies attempt to control this information by making their products closed, refusing to provide details regarding the nature of their software, and protecting those secrets at any cost.  Some even go so far as to take legal action against another entity for reverse-engineering a product in order to comply with certain interface standards (Spinello, 1997).  The open source model attempts to instill a moral imperative to share information, in the utilitarian sense that all parties receive a net gain through technological advancement, in the minds of software developers.

Driving the argument toward open-source development is the idea of interface standards, whereby multiple developers can produce for one infrastructure and consumers benefit the most through pure competition, versus a top-down approach to development where one entity controls who will develop for a particular platform.  In the 1980s, Apple had a technologically superior product in its Macintosh computer, but sales slumped in the wake of the open-standard IBM PC, which allowed for multiple operating systems and a plethora of software titles to be developed faster and distributed more easily than on the Macintosh.  Microsoft continued to ensure this market dominance by allowing certain parts of its Windows OS source code to be available (in the form of “libraries” that supplemented the closed-source behemoth) for developers to freely adapt their products to the mushrooming interface standard.  Apple, since its 2001 renaissance, has still embraced a closed-source model for all its development, especially for its “killer app,” the iPhone, and it may again prove to be its Achilles heel as open source giant Google’s Android platform gains serious momentum, poised to topple the de facto king of the mobile computing market (Roth, 2008).

In addition to providing market analysts with interesting article-fodder, the interface standard debate has started to appear in the academic arena as specialists debate the need for a universal standard by which newcomers to the IT field can learn during their postsecondary education and not have to be re-taught a new, proprietary system every time they change employers.  This kind of redundancy inhibits productivity and wastes valuable resources which could be better allocated to support and improvement roles (Chua, 2005).  Also at issue in the business IT field is the question of using unlicensed software.  Oftentimes, because of monopolistic forces brought on by closed-source products (notably Microsoft Windows and Office), companies adopt these software suites as their standard but are forced to pay unfair prices in order to legally use them.  This is where open source products can certainly make a difference.  Custom derivations of the Linux operating system can be produced for little to no cost, for example, and distributed across a corporate infrastructure, while open source productivity suites such as Google Documents, even though they may not be as “pretty” as their commercial, closed source counterparts, can provide all the functions a company needs to communicate internally and externally.

Quite possibly the most poignant argument between open and closed source information technology lies in the nature of security.  As stated, information is power in our society and, while the majority of (non-personal) information should inherently be free, certain kinds of privileged, personal information should be kept confidential.  This includes financial records, medical records, personal identification numbers, and so forth.  To keep this information secure, certain protocols have to be adopted.  In the world of closed source software, as is evidenced by Microsoft Windows’s track record of gaping security flaws and proliferation of malware, this is not so easily done.  The problem again lies in the lack of available peer review and transparency with regard to its methods of storing and transmitting information.  The idea of “security through obscurity” neglects any thought of a malicious individual breaking through the closed source barriers, and only compounds the problem by preventing concerned developers from immediately identifying, diagnosing, and fixing a flaw.  In the open source world, however, malware is almost unheard of, as security flaws are recognised nearly immediately through peer review and repaired just as fast.  Thus, security is maintained through transparency and collaboration rather than through walls and litigation (Chua, 2005).

The open source philosophy transcends mere software development and can easily permeate every aspect of our society by encouraging a utilitarian idea of fairness that supports Adam Smith’s “invisible hand.”  The spirit of cooperation and the spirit of competition work together to push technology forward–just like a spirit of openness and freedom allowed the Western world to triumph over the closed, walled-off Soviet empire during the Cold War.  There is no technology in existence that did not have its origins in a previous idea or design and there should not be legal barriers to continuing this practice.  Technological advancement is inherently organic; it evolves just as species in the wild do, and there should be no hindrance to this effect (Wikipedia, 2009).

Works Cited

Ask a Scientist. (2008, November 27).  Retrieved November 14, 2009.

Assault Drone. (2009, September 27).  Retrieved November 16, 2009.

Blankenhorn, Dana.  (2008, April 28).  Study Calls Robot the Better Surgeon.  Retrieved November 14, 2009, from ZDNet Healthcare.

Boosting Brainpower. (2009, May 14).  Retrieved November 14, 2009.

Choi, Charles Q.  (2007, October 12).  Sex and Marriage With Robots? It Could Happen.  Retrieved November 14, 2009, from MSNBC.

Chua, Sacha.  (2005, January 4).  Ethical Issues in Open Source [Web log message].

Contemporary Genetics – DNA, Genomics, and the New Ethical Dilemmas. (2009).  Retrieved November 20, 2009.

Cybernetic Hand Prosthesis is Under Development. (2005, December 12).  Retrieved November 14, 2009.

Dickens, Bernard, & Cook, Rebecca J.  (2006).  Legal and Ethical Issues in Telemedicine and Robotics.  International Journal of Gynecology and Obstetrics, 94, 73-78.

Ethical Issues Concerning Robots and Android Humanoids. (2008, June 5).  Retrieved November 14, 2009.

Glass Cockpit. (n.d.).  In Wikipedia.  Retrieved November 22, 2009.

Lazou, Chris.  (2002, July 22).  Ethical Issues – Genetic Engineering.  Retrieved November 20, 2009, from Primeur Weekly website.

Open Source. (n.d.).  In Wikipedia.  Retrieved November 22, 2009.

Roth, Daniel.  (2008, June 23).  Google’s Open Source Android OS Will Free the Wireless Web.  Retrieved November 22, 2009, from Wired.

Spinello, Richard A.  (1997).  Software Compatibility and Reverse Engineering.  In Richard A. Spinello, Case Studies in Information and Computer Ethics (pp. 142-145).  Upper Saddle River, NJ: Prentice Hall.

The Future is Here. (2006, November 5).  Retrieved November 14, 2009.

Unmanned Aerial Vehicle. (n.d.).  In Wikipedia.  Retrieved November 22, 2009.

Velazquez, Manuel G.  (2006).  Business Ethics: Concepts and Cases (6th ed.).  Upper Saddle River, NJ: Pearson Education.

Walking Robot Steps Up The Pace. (2007, March 2).  Retrieved November 14, 2009.

Waltz, David L.  (1996).  Artificial Intelligence: Realizing the Ultimate Promises of Computing.  Retrieved November 22, 2009.


Backup Files On Schedule With CrashPlan

If you need a simple backup scheduler, give Code42’s CrashPlan a try. CrashPlan is available for Windows, Linux, and OS X and allows file backups to local, networked, and off-site locations with a simple, easy-to-use setup.

Download and install CrashPlan Free on each computer you want to back up and on the machine you will use as a backup server. You can have any number of machines connected to your “cloud,” with the only limitation being the available space on the server. I have it backing up my MacBook Pro and VCR to an external hard drive connected to the VCR. These backups are also mirrored in an encrypted folder on a computer at my office across town.

CREDIT: Code42

Cloud backup storage is also available from CrashPlan for a nominal fee, but with off-site storage being as easy as connecting your work computer, I don’t see much need for it.
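CrashPlan handles the scheduling, versioning, and encryption for you, but the core idea (copying a source directory into a dated snapshot folder on another disk, on a timer) can be sketched in plain shell. The `backup_snapshot` function and the paths in the comments below are illustrative assumptions, not anything CrashPlan itself runs:

```shell
#!/bin/sh
# Minimal scheduled-backup sketch. A cron entry such as
#   0 2 * * * /usr/local/bin/backup.sh
# calling this function would take a nightly snapshot at 2 a.m.
# (paths here are examples, not CrashPlan defaults).

# backup_snapshot SRC DEST: copy SRC into a dated folder under DEST
backup_snapshot() {
    src="$1"
    dest="$2"
    stamp=$(date +%Y-%m-%d)            # one snapshot folder per day
    mkdir -p "$dest/$stamp" &&
    cp -a "$src/." "$dest/$stamp/" &&  # -a preserves permissions and times
    echo "backed up $src to $dest/$stamp"
}
```

A real backup tool adds deduplication, incremental versioning, and client-side encryption on top of this copy-on-a-timer pattern, which is why the sketch is only a starting point.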

MSI Motherboards Are a Bargain

When I began planning for the VCR project, I made a trip to my long-time, brick-and-mortar computer parts purveyor Micro Center to shop for components. The motherboard is obviously one of the most important components you can purchase, since it will determine all the other parts you can install. My biggest determining factor, though, is price.

At roughly $60, the MSI H81M-E33 Intel mobo is a fantastic bargain that offers support for the latest Intel Core processors, USB 3.0, and 4K UHD video, as well as some niceties like a metric fucktonne of USB ports, a simple BIOS screen with mouse support, a one-click overclock function, and a simple BIOS updater, all in a space-saving mATX form factor.

The mobo is finicky with Linux, requiring a little jiggery-pokery and breath-holding while it pre-boots, but it works like a champ with Windows. My only real hitch is that it doesn’t enjoy USB peripherals like DVD-ROM drives and some wireless keyboards, but it usually yields to its human overlord after a nominal delay.

Support for MSI motherboards is self-directed, so you’re going to need to have some decent Google ninja skills if you run into a problem. The nice thing, though, is that the website is easy enough to navigate and the basic support documents are quickly located.

There are proprietary drivers available for all the on-board components, and MSI provides a few utilities for customising your system, though I prefer not to use them.