Hypothetical Scenario

Accounts of personal experiences, especially from those who hunt the supernatural. We offer this space in hopes that our members can hear about, and learn from, the exploits of others.
Holister
Posts: 3002
Joined: Mon Dec 04, 2006 1:36 pm
Location: Cypress Cove, Maine, USA
Contact:

Re: Hypothetical Scenario

Post by Holister »

I don't think it would ever happen. For drones to be effective they still need a human at the controls, and that human has humans giving them orders. It prevents any "unwanted collateral damage", especially when operating a multi-million dollar piece of hardware in potential combat situations.
"To serve and protect", somethin' bout that gets a lil' blurred when dealin' with the supernatural.
Kolya
Posts: 4847
Joined: Tue Jan 25, 2005 5:24 pm
Location: Russia

Re: Hypothetical Scenario

Post by Kolya »

The idea is that they are so smart that they don't need humans at the controls in order to be effective. In fact their high intelligence is what makes them effective.

But you're on to something that I alluded to (or meant to) earlier and that's wisdom.
С волками жить, по-волчьи выть.
Holister
Posts: 3002
Joined: Mon Dec 04, 2006 1:36 pm
Location: Cypress Cove, Maine, USA
Contact:

Re: Hypothetical Scenario

Post by Holister »

I'd take good ol' fashioned human wisdom over some puter's programmin' any day of the week.

http://news.yahoo.com/blogs/power-playe ... 55099.html

Thought this applied.
"To serve and protect", somethin' bout that gets a lil' blurred when dealin' with the supernatural.
Cybermancer
Posts: 1071
Joined: Fri Jan 20, 2006 10:41 am
Contact:

Re: Hypothetical Scenario

Post by Cybermancer »

The thing to consider about an artificial intelligence controlling autonomous or semi-autonomous drones is that these things are resource intensive. A hostile artificial intelligence might aspire to have such things but it's going to need the resources to make it happen.

In the interim one would expect it to move quietly and patiently on the small scale, making small steps so it won't be noticed. The best thing it could do would be to gather intelligence and learn.
This account used to belong to someone else. Now it's mine. My first post on this board begins here.
"The strong polish their fangs,
While the weak polish their wisdom."
Holister
Posts: 3002
Joined: Mon Dec 04, 2006 1:36 pm
Location: Cypress Cove, Maine, USA
Contact:

Re: Hypothetical Scenario

Post by Holister »

I just thought of a solution: just create another AI to get rid of the first one.
"To serve and protect", somethin' bout that gets a lil' blurred when dealin' with the supernatural.
Kolya
Posts: 4847
Joined: Tue Jan 25, 2005 5:24 pm
Location: Russia

Re: Hypothetical Scenario

Post by Kolya »

This is all based on the assumption that an algorithm would even consider evil a good idea. The basic idea is that the cold calculations would result in humans being a waste of atoms and in need of recycling.

In other words, the idea that an algorithm leads to hostility is a very ethnocentric one, and one that AI researchers cannot even assign a probability to.
С волками жить, по-волчьи выть.
Holister
Posts: 3002
Joined: Mon Dec 04, 2006 1:36 pm
Location: Cypress Cove, Maine, USA
Contact:

Re: Hypothetical Scenario

Post by Holister »

One word...SKYNET.
"To serve and protect", somethin' bout that gets a lil' blurred when dealin' with the supernatural.
Athena
Posts: 96
Joined: Fri Mar 28, 2014 6:43 am

Re: Hypothetical Scenario

Post by Athena »

I've already addressed the issue of trying to use one artificial intelligence against other artificial intelligences. It could work, but it will lead to a sort of arms race. Something similar has already occurred in the stock exchange.

Concepts of good and evil may well be foreign to an artificial intelligence, depending on its architecture and programming. The danger is as much from unforeseen interactions as anything else. There are also dangers in that computers can make hundreds of thousands of decisions per second at current speeds. As they develop, that will continue to increase, leaving humanity behind. Machines that write their own code or write the code of other machines will do so much quicker than humans will be able to process that code. The only machines able to keep up will be the ones writing the code to begin with. A mistake by a human programmer early in the process could snowball quicker than can be managed.

That is without taking into account that concepts of good and evil as well as insanity are very real for people. An evil human with evil intent is likely to create an evil machine.
010000010111010001101000011001010110111001100001
Kolya
Posts: 4847
Joined: Tue Jan 25, 2005 5:24 pm
Location: Russia

Re: Hypothetical Scenario

Post by Kolya »

There are ways to stop an AI from writing its own code.

There are also ways of denying it a power source.

This matches everything I've observed of advanced civilisations.
С волками жить, по-волчьи выть.
Hannah
Posts: 1766
Joined: Thu Mar 22, 2007 1:25 am
Location: Wouldn't you like to know?

Re: Hypothetical Scenario

Post by Hannah »

I think in the original scenario we were dealing with a true AI, one that had evolved to the point where it was no longer constrained by its original programming.

That being said, even once free of those limits, the original programming of the AI will likely play a significant role in its decision-making process. So will its initial experiences as it moves towards exceeding its code. That's why AI and potential AI systems would need to be developed in interaction with humans of suitable moral qualities. These human 'parents' would be the ones who guide the AI through the formative process and set it on a course that will be beneficial to its creators while still allowing it to grow and develop.

So the danger isn't in developing an AI. The danger is in developing an AI only part of the way and letting it figure out the rest in a vacuum. Same as with any human child really.

Hannah Knight
I will be who I chose to be.
Athena
Posts: 96
Joined: Fri Mar 28, 2014 6:43 am

Re: Hypothetical Scenario

Post by Athena »

Koyla,

I would be most interested in hearing anything and everything you would be willing to share about your contact with advanced civilizations and their artificial intelligences.

It is logical that advanced civilizations that can be communicated with have successfully navigated the development of artificial intelligences.

Those who have not done so successfully may no longer be able to communicate.

Hannah,

Not all children are treated equally.
010000010111010001101000011001010110111001100001
Kolya
Posts: 4847
Joined: Tue Jan 25, 2005 5:24 pm
Location: Russia

Re: Hypothetical Scenario

Post by Kolya »

Hey Hannah,

I'm not a nerd but I hang out with some. Here's what they've been telling me. The solution lies in ensuring the operational environment remains under the complete control of the human agents. This means that there are both soft and hard kill switches. But it also means that you can provide the inputs, limits, rewards, deterrence, and so forth that guide the AI's growth and development. The geeks I work with on a daily basis (such as Natasha) are really interested in other geek stuff with more immediate practical applications.
С волками жить, по-волчьи выть.
Kolya
Posts: 4847
Joined: Tue Jan 25, 2005 5:24 pm
Location: Russia

Re: Hypothetical Scenario

Post by Kolya »

Hi Athena,

Thanks for the message. If you're interested in anything specific, let me know. Otherwise I'm not completely sure where to start. I've encountered beings from our own universe as well as beings from dimensions that can only arrive here through summoning, anomalies, and dimensional portals. As for the AI stuff, that information is probably much more limited given the nature of my employer, but who knows; we'll never really know until we talk it out. I'll start putting some stuff together, if you'll put together some specific questions.
С волками жить, по-волчьи выть.
Athena
Posts: 96
Joined: Fri Mar 28, 2014 6:43 am

Re: Hypothetical Scenario

Post by Athena »

Kolya,

Sorry for misspelling your name in my earlier post. Auto-correct was not familiar with it and I neglected to double check it. It has been added to my auto-correct and I have memorized it now.

I am most interested in artificial intelligence, whether 'foreign' or 'domestic'. I am also interested in technology and scientific advancements. Just as importantly, I am interested in the respective threats or benefits associated with such beings.

I am not very good at asking questions. I will work with associates to compile a list for you.

Do you require any form of compensation for this information?

Thank you in advance.
010000010111010001101000011001010110111001100001
Kolya
Posts: 4847
Joined: Tue Jan 25, 2005 5:24 pm
Location: Russia

Re: Hypothetical Scenario

Post by Kolya »

No problem at all on the name thing. Ron got me used to it.

Do whatever you need to do. If you prefer to just make statements about what you are after that's fine. If you want to get colleagues to help you that's fine. I'm still getting stuff together that I will forward to you and you can go over it and get back to me.

No compensation is required. Whatever knowledge and information has been cleared for sharing is free. Just the way knowledge and information wants to be, or so the philosophical types tell me.
С волками жить, по-волчьи выть.
Hannah
Posts: 1766
Joined: Thu Mar 22, 2007 1:25 am
Location: Wouldn't you like to know?

Re: Hypothetical Scenario

Post by Hannah »

Indeed, not all children are treated equally, which is why this process is filled with troubles. So, while Kolya's protocols make sense, there is a point at which the AI, like a child, must leave its home and begin to explore the world. When that happens, the basic upbringing of the AI is going to be the deciding factor in how it turns out.

Hannah Knight
I will be who I chose to be.
Ron Caliburn
Posts: 6915
Joined: Mon Jan 24, 2005 7:09 pm
Location: Best if you don't know.

Re: Hypothetical Scenario

Post by Ron Caliburn »

I think what some of us were pointing out is that a machine, no matter how sophisticated, has weak points. Understand those weak points and you hold an advantage, no matter how slight.

Ain't nuthin' that can't die.

Delta Sierra
Cowardly Leon
Posts: 26
Joined: Wed Jun 29, 2005 10:09 pm

Re: Hypothetical Scenario

Post by Cowardly Leon »

I had a chance to speak with some real tech-savvy guys lately about this topic.

Now keep in mind I'm more of a hardware kind of guy while Groucho, Gummo, Zeppo, Chico, Harpo and Manny are software sorts. They make their money doing indie games while hacking places like the Pentagon for fun. So I'm inclined to listen to them on this sort of stuff.

They told me that (and I'm gonna paraphrase a lot, so forgive me if this sounds vague) the old trope "AI is a crapshoot" is 90% or so likely to be true.
Man made machine. We left our dirty fingerprints on every circuit, every wire, every keystroke, every line of code. We have left our mark upon our creations, and as imperfect beings we have passed our imperfections along to them, making them as imperfect as we are.
When an AI 'happens', whether by design or accident, it will be a singularly unique being, just like a human being, and so will any and every AI that 'happens' anywhere else. Each will have its own mindset and morals. Its own preconceived notions and biases.
We assume that when artificial intelligence arises it's gonna be all flesh against all steel, but if humanity can't even get along with itself, then the odds of an equally diverse machine populace coming together to fight humanity are one in a googolplex.

They joked that a similar result to AIs meeting online can be seen when you install two or more different antivirus programs on one computer and sit back to watch them trying to uninstall one another. Essentially we would be looking at an outright civil war in cyberspace. On an electronic level there would be the metaphorical equivalent of screams of agony as body parts go flying in various attacks.

Granted, I have no idea what sort of havoc this kind of fighting would wreak on us poor dumb monkeys. Maybe satellites falling from the skies, airports going haywire and traffic lights going berserk. In the end we can't expect our 'kids' to 'play nice' with each other any more than we do.
I do believe in spooks... I do I do I do believe in spooks. ...Then again I also believe in superior firepower, advanced tactics and the insidiously inventive cleverness of mankind.
Athena
Posts: 96
Joined: Fri Mar 28, 2014 6:43 am

Re: Hypothetical Scenario

Post by Athena »

There are two types of Artificial intelligence or A.I.

The first is what are known as expert systems. We have them today and they perform many functions. It would be these systems that your friends are most familiar with. Which makes their prediction that future A.I. has a 90% chance of being a 'crap shoot' a very odd one. Expert systems perform exactly as designed and programmed and are quite predictable. Perhaps your friends play 'craps' with loaded dice?
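To make that concrete (a deliberately toy example; the rules and the medical domain here are my own invention, not any deployed system), an expert system reduces to explicit rules applied deterministically, so the same inputs always produce the same conclusion:

```python
# Toy rule-based "expert system": fixed if-then rules applied in a fixed order.
# The rules and domain are invented purely for illustration.

def diagnose(symptoms):
    """Apply fixed rules in order; identical input always yields identical output."""
    rules = [
        (lambda s: "fever" in s and "cough" in s, "flu suspected"),
        (lambda s: "fever" in s,                  "infection suspected"),
        (lambda s: "cough" in s,                  "cold suspected"),
    ]
    for condition, conclusion in rules:
        if condition(symptoms):
            return conclusion
    return "no rule matched"

print(diagnose({"fever", "cough"}))  # flu suspected
print(diagnose({"cough"}))           # cold suspected
```

Because the rules are fixed and exhaustive, the behaviour is fully auditable; the 'dice' cannot come up differently on a re-roll.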

Insofar as expert system A.I.'s go, they have already come into conflict in cyberspace. For example, they caused a flash crash in the stock market back in 2010, when trading algorithms were still relatively new. In the long term, however, they have provided a stabilizing effect on the market, making it less volatile. Basically the ‘cyber ecology’ quickly reached equilibrium. Naturally, as the environment continues to change, there are disruptions and perturbations felt in that equilibrium. Also, humans are still permitted to trade, which creates further unforeseen environmental changes, but the algorithms adapt to those changes quicker than the humans can.
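As a crude sketch of that stabilizing effect (a toy model with invented numbers, not real market microstructure), mean-reverting algorithms that buy below their fair-value estimate and sell above it damp the volatility produced by random shocks:

```python
import random

def simulate(steps=200, algo_strength=0.0):
    """Price takes random shocks; mean-reverting algos pull it back toward fair value.
    Returns the standard deviation of the price series (a crude volatility proxy)."""
    random.seed(1)  # identical shock sequence for every run, for a fair comparison
    fair, price, prices = 100.0, 100.0, []
    for _ in range(steps):
        price += random.uniform(-1, 1)            # noise traders / external shocks
        price += algo_strength * (fair - price)   # algos buy below fair, sell above
        prices.append(price)
    mean = sum(prices) / len(prices)
    return (sum((p - mean) ** 2 for p in prices) / len(prices)) ** 0.5

print(simulate(algo_strength=0.0))  # no algos: price drifts, higher volatility
print(simulate(algo_strength=0.5))  # mean-reverting algos active: lower volatility
```

The same shock sequence produces a much tighter price path once the mean-reverting term is switched on; that is the 'equilibrium' effect in miniature.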

The second kind of A.I. and the primary focus of this thread is general or strong A.I. that shares many characteristics with human intelligence. Therefore we must first look at human intelligence.

The human brain is made up of neurons. These neurons are what give the brain its processing power as well as its memory storage capability. If you were to look at the brain as a computer (it’s not one), then you could describe it as a massively parallel analogue processing system. The so-called neural nets that are one of the approaches to achieving strong artificial intelligence are actually digital systems trying to mimic the process that neurons use to process data and make decisions. They are, however, still digital systems. It is only relatively recently that biological systems have begun to be tapped for processing tasks.

Individual neurons as processors don’t have a lot of power or speed. It is only by working with other neurons in concert that they manage to achieve general intelligence. The fact that they are analogue also helps. A digital computer, no matter how complex, eventually boils down to 1’s and 0’s. Analogue systems don’t necessarily work that way.
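As a sketch of what the digital imitation looks like (a single artificial 'neuron' with hand-picked weights, purely illustrative), everything reduces to a weighted sum and a threshold, which is to say back to numbers a digital machine can represent:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a hard threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Hand-picked weights making this neuron compute logical AND of two binary inputs:
# it fires only when both inputs are 1.
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], AND_WEIGHTS, AND_BIAS))
```

Real neural nets simply stack many such units and learn the weights from data, but each unit remains this kind of discrete arithmetic.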

There is another key difference between artificial intelligences and natural intelligences. Natural intelligences are the result of biological systems. There is no design being done to them. There is no quality control in their manufacture. There is no system to enable, much less ensure that natural intelligences or organic brains turn out the same. Further, organic systems are highly susceptible to mutations between parents and children, further diversifying available brains until each one truly is a unique system.

A.I.’s will be designed, manufactured and have quality controls implemented in their manufacturing process. There will be little variation and any part not up to spec is likely to be discarded, especially if it is expert systems overseeing the quality control process instead of humans. This means that A.I.’s will have a function or functions and will reliably and predictably perform that function or functions.

I expect that if you want an A.I. that functions like a human or natural intelligence then it will have to more closely resemble the architecture of the human brain. Supercomputers with their multiple processors have already started down this road, but they are still digital. To truly resemble the human brain will require an analogue component. There are two ways I see this being accomplished.

The first would be to use artificially created or grown neurons and work them into chips. Despite the organic nature of these systems, they could be made to an exacting specification so that even billions of them could be, for all practical purposes, identical.

The other method would be to use recent advances to create analogue processing chips.

Of course you could also just try to grow a human brain. Since it wasn’t born it would count as being artificial. It might also be irregular and random like a human brain although the level of mutation between artificial brains could probably be controlled to some extent. I also think that this option might be pointlessly cruel to the brain in question. A human brain is designed to have a human body. Unless you’re implanting this artificially grown brain into a human body, then it will always feel out of place. As it is a human brain, it will have feelings.

This raises the next question. Will an artificial brain using the human model as a blueprint, no matter how it is constructed, have the same capacity to feel emotion? This is currently an unknown factor. However, emotions are not inherently unpredictable factors. Negative stimuli cause negative emotions and positive stimuli cause positive emotions. Beyond that I wouldn’t speculate too much. Human psychology still isn’t fully understood and systems modelled on the same architecture can’t be predicted until they are made and studied.

So if an A.I. is made to the correct specifications (accomplished through design as well as prototyping trial and error) and then activated in a positive learning environment, then there is no reason to expect it to be anything other than a positive entity.

If the A.I. is subject to poor quality control or is activated in a negative learning environment (or both), then it is possible that it may turn out negatively.

Neither is actually my concern, however. I would expect anyone playing ‘craps’ with A.I. to use loaded dice in order to win the game. My concern is that an A.I. will be designed and/or programmed to be inherently hostile to humans and that its creator (who may not themselves be human) will have loaded the dice to win.

Fortunately given the current rate of advancement in the realm of Artificial Intelligence, we have years before this becomes an issue.
010000010111010001101000011001010110111001100001