Synthetic Thought: The Human AI - Can a Machine think like a Man?
It is located in the Upper City Labs of Tai Yong Medical near the Data Core Room.
Can a Machine think like a Man?
Since the late nineties, the scientific community's attempts to construct what one might call a "bio-computer" system have evolved from pure science fiction to a viable and very real research endeavor.
Consider a bio-matter device with the capacity to repair itself, to reprogram itself, perhaps even, at some advanced level of complexity, to modify its own structure to better perform its assigned tasks. The ongoing maturation of nanobiotechnology has opened the gateway to the construction of nano-scale systems utilizing biologically-derived components, merging organic interfaces with the processing capacity of traditionally-derived computer architecture.
Pioneering designs already exist, using patterned variations built on the brain structures of rodents and canines, and a human-scale iteration of this technology is not only possible but, if successful, may represent a radical evolution in thinking systems.
Synthetic Thought: The Human AI - Can a machine lie?
Can a machine lie?
The evolution of artificial intelligence raises questions of a moral and ethical nature that affect humans in a way no other technological development has before. If the goal of AI research is to create a mechanism that reasons and intuits in a fashion similar to a human organism, then what is the scope of the morality of such an intelligence?
As early as 2009, at the Laboratory of Intelligent Systems of the École Polytechnique Fédérale de Lausanne, Swiss scientists developed an iterative experiment using robots with a 264-bit binary code 'genome' that governed how they reacted to different stimuli. After 500 iterations, the LIS robots had learned to think around a task set to them - the gathering of a resource - to the point where they could 'lie' to their fellows in order to hoard said resource for themselves.
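The experiment described above follows the general shape of an evolutionary algorithm: a population of bit-string genomes is scored on a task, and the fittest are copied forward with random mutation. A minimal sketch of that loop is below; the genome length and iteration count come from the text, but the fitness function is a toy stand-in (counting 1-bits), not the LIS robots' actual resource-gathering score.

```python
import random

GENOME_BITS = 264   # genome size mentioned in the experiment description
POPULATION = 20     # population size is an assumption for illustration
GENERATIONS = 500   # iteration count mentioned in the text

def fitness(genome):
    # Toy stand-in for "resource gathering" success: the number of
    # 1-bits. The real robots were scored on behavior near a resource.
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POPULATION)]

for _ in range(GENERATIONS):
    # Rank by fitness; the top half survives unchanged and also
    # produces mutated offspring to refill the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

best = max(population, key=fitness)
```

Because the top half survives unmutated, the best fitness never decreases, and over 500 generations the population converges toward high-scoring genomes; deceptive signalling in the original experiment emerged the same way, simply because it raised the deceiver's score.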