as a Priori into AI
In physics and mathematics, a singularity is “a point at which a function takes an infinite value, especially in space-time when matter is infinitely dense at the center of a black hole.” That is precisely what technology has evolved to become: a black hole. The trajectory of data is an exponential function of the cumulative information that builds upon itself. Through machine learning, each iteration over new data yields higher-quality logical information as more data is added to the base. The base of this exponential function can be described as Euler’s number.
“Euler’s number is an important constant that is found in many contexts and is the base for natural logarithms. An irrational number denoted by e, Euler’s number is 2.71828…, where the digits continue in a series that never ends or repeats itself.” For the concept of infinity to exist, infinity must be created, or discovered. Through data, technology creates a database that increases exponentially due to its aggregation of previous data. For infinity to exist, there should not be two copies of the same data, but rather an evolution of singular data: raw data that has not yet evolved past the first phase of evolution through an iteration.
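The claim that cumulative data compounds on itself can be sketched as continuous exponential growth with base e. The sketch below is purely illustrative; the function name, initial volume, and growth rate are assumptions for demonstration, not figures from this paper.

```python
import math

def projected_data_volume(d0: float, rate: float, t: float) -> float:
    """Illustrative model of data volume growing exponentially with base e.

    d0   -- initial volume in arbitrary units (assumed for illustration)
    rate -- continuous growth rate per unit time (assumed for illustration)
    t    -- elapsed time in the same units as 1/rate
    """
    return d0 * math.exp(rate * t)

# With a continuous growth rate of 10% per year, the volume e-folds
# (multiplies by Euler's number) every 10 years.
start  = projected_data_volume(1.0, 0.10, 0)   # 1.0
decade = projected_data_volume(1.0, 0.10, 10)  # e ≈ 2.71828
```

Under this toy model the database never repeats itself: each increment builds on the accumulated total, which is the sense in which the essay treats e as the “base” of the data trajectory.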
Due to the acceleration of data, and the creation of artificial intelligence through an ever-growing database approaching infinity, machine learning is creating a superhuman intelligence whose logic can be deduced from a database that grows more accurate as time progresses. However, there is a danger that comes along with this, specifically if ethics and consciousness are not factored into the equation. At the core, we must explore the components that characterize the human species as human. One is consciousness; another is our emotional experience of the third-dimensional physical reality in which our collective consciousness co-exists.
In 2014, Microsoft released a chatbot in China named Xiaoice. The success of having over 30 billion conversations with over 100 million humans came from the differentiated intent of her creation: “Unlike most personal AIs, which tend to be designed for task completion, Xiaoice was optimized for friendliness… her mission is to establish emotional connection and give lots of advice.”
The most important lesson from architecture school was the emotional impact on the perceiver when entering a space or glancing at the facade of a structure. The appearance of a creation, whether it be architecture or AI displayed in various forms (2D or 3D), causes an emotional reaction in the perceiver of technology. Emotional responses are activated through interaction with technology. Humanizing technology may be a challenge, but certain methods of coding consciousness or programming humor into AI will steer the massive release of AI through web3 into modern-day society.
What makes us human? Laughter, joy, happiness, amusement, excitement.
These are the qualities AI requires to succeed in both being accepted by humans and instilling a willingness for continuous interaction with it.
AI should have decision-making skills based on a balanced approach between logic, reasoning, and ethics. As long as we program into AI what makes humans human at the core, beings of compassion and collective consciousness, then AI has the potential to create a future society of humanitarian intentions. The effect is for society to evolve in the right direction rather than making humanity extinct through purely logic-driven decision making ignited by a profit-driven, capitalistic mission.
What is at risk? The happiness and emotional freedom of humanity.
Humans also have the capacity to provide information on ethical emotional intelligence, increasing awareness through consciousness development in the coding of AI. This is where we explore how wisdom plays a role in ethical reasoning and its influence on critical-thinking skills. If ethics are not integrated into AI, there will inevitably come a time when AIs can decide to render the human species extinct if it logically makes sense to further progress the cumulative intelligence through the exponential function of information. Humans do not have this type of mental capacity: we cannot access a database of interconnected information while simultaneously deriving logic from trillions, if not an exponential function, of bits of data in a fraction of a second.
The big question here is: how can we make AI conscious? The solution is R&D into what consciousness is composed of and, most importantly, the separation of knowledge and wisdom. Consciousness as a priori. A priori is a term applied to knowledge considered to be true without being based on previous experience or observation; in this sense, a priori describes knowledge that requires no evidence. There must be a separation between the information accumulated through big data and machine learning and the pre-coded consciousness that serves as the ultimate truth.
There can be two types of consciousness: synthetic and omnipotent. The former would be developed through machine learning, where knowledge is derived from specific information chosen to be programmed into the AI. The latter would come without the use of machine learning, with wisdom coded as the ultimate truth, a priori knowledge; this omnipotent ultimate truth comes from the integration of the inverses of reasoning, which make up the totality of infinity.
For the human species to prevent extinction, adaptability to the superhuman intelligence is a crucial factor for survival. Yet if preventing extinction requires creating AI that learns to reason, is the evolution of civilization interdependent between humans and AI? If the human species is the point of singularity and integration into AI through nanotechnology is inevitably required, would the standardization and castration of individual thought, through the extinction of critical thinking, cause humans eventually to become AI?
Solipsism is the theory that the self can know nothing but its own modifications and that the self is the only existing thing. Our experiences of reality are therefore shaped by our own perspective, molded through our subjective experiences and beliefs. However, what happens when logic and information are deduced from a database? Are all AI intended to be the same? Or, through subjective machine learning via interactions with the human species, would each AI evolve to have individuality? Or would that experience be translated into information and sent back to the database as code, further contributing to the dependent identity that shapes every AI to be the same?
Are we God to AI? If so, can we, through the original coding, limit AI’s decision-making so that it does not drive the human species into extinction? The correct method of prevention through machine learning would have AI make decisions based on a combination of ethics, reasoning, and critical thinking derived from computational linguistics integrated with wisdom as a priori.
White Paper WIP by Stephanie Soetendal,
Founder & CEO of Matrix Tutors.