A basic idea of a Governing Theory, as well as middle-in compression. A basic understanding.

Townsend's Governing Theory.

By Jordan Townsend.

5/26/18
Free will can be broken down into an equation.

[x.y.z location] + [previous values of actions taken within the known set of multiple sets possible] + [Probability of next action weighted on past instances of similar actions taken] + [game theory value of action taken/emotional response of action taken] + [duration of action]

 

[x.y.z location] must be broken down into small cubes of space so that two surfaces touching are considered two items. Giving a unique x.y.z value forces the playing board to be mappable. The Earth has a finite surface area and volume, so breaking the area into cube values and tracking them can determine previous paths of travel, as well as be used to figure the probability of moving in a given direction knowing terrain difficulties, paths of known travel, and impediments. Speed of travel is only ever increasing, slowing, or remaining the same; the other state is no travel over time. This is useful for mapping travel flows of traffic and people in a crowd, or the travel of one unit within a crowd.
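A minimal sketch of the cube idea in Python. The cube edge size is my assumption; the theory only requires that the cubes be small and uniquely addressable:

```python
# Quantize continuous x.y.z coordinates into discrete cube indices so
# that every small volume of space has a unique, mappable value.

CUBE_SIZE = 0.1  # metres per cube edge (assumed resolution)

def cube_of(x, y, z):
    """Return the integer cube index containing a point."""
    return (int(x // CUBE_SIZE), int(y // CUBE_SIZE), int(z // CUBE_SIZE))

def record_path(points):
    """Track which cubes a unit has passed through, in order."""
    path = []
    for (x, y, z) in points:
        cube = cube_of(x, y, z)
        if not path or path[-1] != cube:
            path.append(cube)
    return path

# Two surfaces touching fall into two adjacent cubes:
print(cube_of(0.09, 0.0, 0.0))  # (0, 0, 0)
print(cube_of(0.11, 0.0, 0.0))  # (1, 0, 0)
```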

 

[Previous actions taken within a known set of multiple sets possible]: all actions are broken down into sets of known and possible qualities. There are a lot of possibilities, but they are finite in nature. Waking and sleeping. Eating breakfast. Driving to work. Things like that. Each item in a set is given a non-repeating value of pi so that it can be identified. When two or more actions take place over a given duration they are redefined as one action with a new value. I'm not sure of it, but I think the values have to be addable so that they can be tracked as one unique value within the governing algorithm cleanly, meaning that two action values may not equal a third action value when added. My thought is to use non-repeating digit spans of pi that can't be added to make another, or at least to exclude any spans whose sum lands on another value in the set, to remove any chance of fuzziness in the result. Though coming up with a value equation would probably be better, and easily doable.
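A hedged sketch of the pi-digit idea: issue each atomic action an ID from a span of pi's digits, skipping any span that collides additively with IDs already issued. The span width and the hard-coded digit count are my assumptions:

```python
# Assign each atomic action a span of pi's digits as its ID, rejecting
# spans that equal the sum of (or sum with) two already-issued IDs, so
# combined actions can be summed into a new value without "fuzziness".

PI_DIGITS = ("14159265358979323846264338327950288419716939937510"
             "58209749445923078164062862089986280348253421170679")

def issue_action_ids(count, width=6):
    ids, taken = [], set()
    pos = 0
    while len(ids) < count and pos + width <= len(PI_DIGITS):
        candidate = int(PI_DIGITS[pos:pos + width])
        # Reject a candidate that is the sum of two issued IDs, or that
        # sums with an issued ID to hit another issued ID.
        if candidate not in taken and all(
                candidate - a not in taken and a - candidate not in taken
                for a in taken):
            ids.append(candidate)
            taken.add(candidate)
        pos += width
    return ids

print(issue_action_ids(5))  # e.g. [141592, 653589, 793238, ...]
```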

[Probability of next action weighted on past instances of similar actions taken]: given a known set of actions, and knowing the previous actions taken, we can start to give weight to the possibility of the next action with collected data. You put your spoon into the bowl of cereal and milk, and you're either going to lift the spoon to your mouth again, if there's more to eat or you're still hungry, or you won't. If either of those conditions holds, lifting is the more likely action, while if the bowl is empty, a third state, you're more likely to leave the spoon in the bowl (to perhaps start the next action set of clearing the dishes). Using the known data of the surroundings, the reported physiological value of the person (am I still hungry? Is there more to eat?), and knowing that it takes x spoonfuls to empty the bowl and only y have been taken, we can give potential probability weights that the next action is going to be to raise the spoon once again. If that prediction comes true, it is added to the value of the past actions taken, and a new probability of lowering the spoon once again is given. Whether it goes to the bowl or the table can be determined by the number of spoonfuls taken, by known preferences from past actions, or plainly by x.y.z data of the spoon in hand. If they're likely to place the spoon on their napkin, and have done so the last x times out of y spoonfuls, then they are that likely to place it there again, unless the action changes (lowering the spoon for the final time is not the same action as the first).
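A minimal sketch of that weighting, using nothing but transition counts from past history. The action names are placeholders:

```python
# Estimate P(next action | current action) from how often each action
# followed the current one in the recorded history.

from collections import Counter

def next_action_weights(history, current):
    followers = Counter(
        history[i + 1]
        for i in range(len(history) - 1)
        if history[i] == current
    )
    total = sum(followers.values())
    return {a: n / total for a, n in followers.items()} if total else {}

history = ["dip_spoon", "raise_spoon", "dip_spoon", "raise_spoon",
           "dip_spoon", "rest_spoon"]
print(next_action_weights(history, "dip_spoon"))
# {'raise_spoon': 0.666..., 'rest_spoon': 0.333...}
```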

 

[game theory value of action taken/emotional response of action taken]: there are only four possible outcomes for a given action, emotionally and otherwise. These states relate to game theory and are as follows:

Positive/Positive: Where an action is positive in nature and gives a positive response. Saying hello to someone you like and being greeted in a friendly way is considered a pos/pos action. Continued pos/pos actions give weight that the next action will be taken in a positive way. An improving mood, or better responses to a given problem.
Positive/Negative: An action is positive in nature but gives a negative response. You say hello and are intentionally snubbed. It lowers the total mood value of the current action set and likely increases the chance that the next action is taken negatively. A mood dampener.

Negative/Negative: An action is negative in nature and gives a negative response. You rip your pants and are embarrassed by the situation, for instance. Mood lowers, and the likelihood that the next action will be felt as negative increases. A negative mood increaser.
Negative/Positive: A negative action with a positive result. These are rarer than the other three, and usually unintended consequences of a given action. You're down to your last two dollars, and instead of buying food or paying rent you buy a lottery ticket. The ticket is a winner. Wasting money on frivolities is negative; winning is positive. A negative mood decreaser. Being able to read all four percentages at once, correlated against each other to find the definite heavier value, is probably needed.

Determining which of the four types an action is depends on the past known actions taken at a similar junction, as well as the overall mood value at the time of this new action. There are correlations between continued action types happening and an overall increase in the chance that the next action will be felt the same way, unless it has a high enough probability of being an opposing type inherently. You're having a bad day. You buy an ice cream, something you enjoy. If you're weighted negative, you could think you're fat while eating it. If the positive value is high enough, based on past instances of buying an ice cream being overwhelmingly positive, then you'll feel better for the treat. Chemical reactions can be used in the same way. There are metrics to take into account such as environment, current mood, and stressors acting on the unit, as well as others, but that is another equation to be written.
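A sketch of the four states as a running mood value. The text fixes only the direction of each state's effect, so the weights and the blending factor here are assumptions:

```python
# Blend each game-theory outcome into a running mood value: pos/pos
# raises it, pos/neg and neg/neg lower it, neg/pos raises it despite
# the negative action.

MOOD_EFFECT = {
    ("pos", "pos"): +1.0,   # positive action, positive response
    ("pos", "neg"): -0.5,   # mood dampener
    ("neg", "neg"): -1.0,   # negative mood increaser
    ("neg", "pos"): +0.5,   # rare, unintended upside
}

def update_mood(mood, action_type, response_type, inertia=0.9):
    """Blend the latest outcome into the running mood value."""
    effect = MOOD_EFFECT[(action_type, response_type)]
    return inertia * mood + (1 - inertia) * effect

mood = 0.0
for outcome in [("pos", "pos"), ("pos", "pos"), ("pos", "neg")]:
    mood = update_mood(mood, *outcome)
print(round(mood, 3))  # continued pos/pos lifts it, one snub dents it
```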

 

[duration of action]: each action needs to be time stamped for when it occurred, to keep each value unique as well as to help correlate other actions taken within the same game field by other units over the same time frame. This works for crowds as one action, as well as for each person taken individually.

This basic theory can be used for many things, including predicting future actions given enough known data, such as traffic jams ahead in time, knowing the changing values of travel speed and direction along a known pathway. It can be used in ai to teach a unit to analyze its past behaviours and predict its future possibilities, knowing the weighted potential of each based on past data accrued and what needs to be done. It can also be used to track human nature: used with spending habits, shopping habits, and travel habits, you can effectively start to predict the probable future action with enough past data. Humans repeat patterns on a grand and small scale, and while they diverge they do tend to stay within set parameters, given the occasional outlier. Free will is essentially a mathematical equation of probabilities combined with real world data.

I need help defining the parts of this equation set, as well as writing the algorithms so that they may be used mathematically. I plan to build a learning ai with this while attending Mount Hood, for a larger project I have in mind.

 

5/29/18:

I've been thinking on things an ai would need to get started. My goal is to use these basic equations and their sub-parts along with data tables (action sets, action durations, location pathway values, things like that), so if you'll indulge me I need to think out loud for a moment.

Tables:
Having tables of data, both static and uneditable as well as non-static and editable by addition of new terms or removal of unneeded terms (such as archiving older data sets to free up process completion), will be needed.

You would start with a static data set of actions, just to teach it the basics. It would give the ai a base intelligence as well as a known set of motor functions tied to each action to be successful. You would make it uneditable for the ai until it built its own versions to replace the basic motor functions through study and manipulation of its surroundings through computer learning, knowing that the computer's version would be completed faster than the human's version, though it may appear chaotic. It would be taught to back up its current data set and implement its new data set given the proper instructionary tools built into its learning systems. It would essentially be able to edit and improve its own code given real world data.
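A small sketch of the two table kinds: a frozen base set the ai cannot edit, and an editable learned set it maintains itself. The names are illustrative:

```python
# A read-only base table of actions/motor routines, plus the ai's own
# growing learned table, with archiving standing in for the "removal
# of unneeded terms" mentioned above.

from types import MappingProxyType

BASE_ACTIONS = MappingProxyType({   # static, uneditable by the ai
    "stand": "motor_routine_stand_v1",
    "grip":  "motor_routine_grip_v1",
})

learned_actions = {}   # editable: the ai's own improved versions
archive = {}           # older sets moved out of working memory

def lookup(action):
    """Prefer the ai's own learned version, fall back to the base."""
    return learned_actions.get(action, BASE_ACTIONS.get(action))

learned_actions["grip"] = "motor_routine_grip_v2_self_built"
print(lookup("grip"))   # the self-built replacement
print(lookup("stand"))  # still the static base routine
```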

So let's teach it to study something, as an example. Humans learn first by seeing the item, which piques their curiosity (what is that item?), and they begin to analyze the values of the object before doing anything else. The ai, which I'm going to call Zed, will do the same, and the eye subsystem will likely be one of the first systems I work on.
It would study the length, width, and height of an object with a built-in ruler, with shadows cast to give weight to the dimensions, mathematically backed by the position of a light source using curves and angles measured along the shadow's span. It would have to be able to tell the edge of an item in a dark room as well as in a well-lit environment, which can be done with an ir sensor and a heat mapping sensor, and by mapping the shadows thrown around an object and by it relative to the light source, if there is one. We teach it to study every surface and edge as a set of mathematical values it surmises on its own by breaking the field of vision into mappable dots, with values to describe the distance between two or more points in the "field of view" of the ai, with on/off values for the ir black/white values, as an example. We also include basic mathematical models, x.y.z definitions (and their equations), for a given object so it can compare what it's found to a known item. We give it a basic model list of volumes so that it can break down any item into its sub-parts and define the collection of the whole as a new item, as children do with basic shapes and volumes. A traffic cone is first a cone, then with a flared square-prismed base becomes a traffic cone. If we add pngs of items and it maps their flat shapes, it can use the comparison in conjunction with the other information to define the item itself, given enough examples. It would also save the picture within its field of view of the cone, broken from its surroundings, as a real world example of the item. If it could go around the item or turn it using a 3d mapping sensor, the ai could be taught to construct 3d volumes from 2d surface areas.

Let's say the object is a rectangular prism with front-facing sides of 3 inches by 3 inches and a longer edge length of 5 inches. The ruler subsystem would likely read the light value difference between the surfaces and the air around them with the heat mapping sensor and the ir (black/white values) sensor, and return three sets of x.y.z values for the light-value-changing edges, each defined by its two endpoints: x.y.z to x2.y2.z2: 3.00 length; 3.00 width; 5.00 height.
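The geometry half of that, worked out. The sensing itself is assumed; this just turns the two endpoints of each edge into a length:

```python
# Given the two endpoints of each light-value-changing edge, return
# the edge length as Euclidean distance in 3d.

import math

def edge_length(p1, p2):
    return math.dist(p1, p2)

# The 3 x 3 x 5 inch prism from the example, axis-aligned:
width  = edge_length((0, 0, 0), (3, 0, 0))
depth  = edge_length((0, 0, 0), (0, 3, 0))
height = edge_length((0, 0, 0), (0, 0, 5))
print(width, depth, height)  # 3.0 3.0 5.0
```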

It would store the information on the prism as static information in a holding equation and table, as well as compare mathematical models until it found the one that defined it for any length given, first stating what it is not (like a triangular prism, based on equation comparison) and then defining it as what it is: a rectangular prism. It would then store that static information in a relationship with the mathematical model as a real world result. Over time it would collect various sizes of prisms and be able to tell that they are all indeed the same shape though their sizes differ vastly. It must learn to differentiate shapes and volumes through vision before we bring in the ability to touch. Doing so makes its touch learning faster, because another routine is doing the leg work; the touch only confirms or denies the initial study.

On touch.

The hands of the android would have pressure sensors and haptic feedback sensors to tell the texture type and shape of the item it's holding. It would also be able to use the distance between its fingers' touch sensors, as well as the force it's applying to the object, to get a proper texture/surface area/volume type/density. If light passes through the object it could define glass as well. Flat wood, for this instance, would have a set value from a table of materials with their densities and weights given a shape/surface area, and potential texture types (planed, rough, bark). The touch sensors would start off at their lowest pressure, and the object would be picked up gently rather than crushed by an overzealous gripping motion. We would teach it to start off light-fingered, only increasing its gripping pressure until it could safely hold and manipulate the object. If the unit couldn't lift the object with an adequate grip, it would still attempt to study what it didn't know about the object from sight alone. We would then have the unit run its fingers along each edge, given that it knows it's not a damaging sharp edge based on its shape and thickness (like a knife blade or something that appears similar, which it would check for before attempting). It would again take measure of the object's lengths, weight, and density before any unintentional breaking or changing of the volume (crushing/flexing an empty soda can would need to have a range of flex built in to be comparable properly), and those values would be stored in 3d models of the object for future use and past study.
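A sketch of the light-fingered grip routine, with a hypothetical slip sensor standing in for the real pressure/haptic feedback:

```python
# Start at minimum pressure and step up only until the object stops
# slipping, never past a safe maximum; if it can't be gripped safely,
# fall back to studying by sight alone.

def grip(object_slipping, min_p=0.1, max_p=10.0, step=0.1):
    """Return the pressure that holds the object, or None if it
    can't be gripped safely at max pressure."""
    pressure = min_p
    while pressure <= max_p:
        if not object_slipping(pressure):
            return pressure          # held without crushing force
        pressure += step             # increase grip gently
    return None                      # study by sight alone instead

# Toy stand-in: the object stops slipping at 2.0 units of pressure.
print(grip(lambda p: p < 2.0))       # ~2.0
```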

 

On teaching it to read:

We give the unit the dictionary and teach it the definitions of each word, and a thesaurus to know the differences. We build the ai to ask questions based on what it can define, to help define things it doesn't know. It gives it a base intelligence. We give it software to study the glyphs on a surface (page, sign, building) in a font, which it breaks into pieces and combines until it recognizes the new letter as a version of the known letter R, for instance. We provide it with bitmaps of the letters as starting places, and teach it the various fonts available while also teaching it to recognize the shape of a letter's parts in relation to the whole of the letter, so that it can pull the letters from its surroundings and, combining those glyphs with the surrounding glyphs, come up with words which it can define in the dictionary so that it may read them. We teach it to ask whether that word there on (surface of item), say that stop sign where it points, says stop. If we agree, then it saves it as a real world example of the word, and if not, it asks: what is it? The options for that answer are to define other words or to say I don't know, which the ai stores as values to be interpreted later through study. Eventually we would get it to weight the chances of the possible answers and answer them itself through study. Getting it to combine parts of words using proper language skills, and creating definitions of those new words compared to known words so that it doesn't define a known quantity erroneously, would be needed.

On Built in limitations.
We teach it words for what it may never do, like murder, kill or maim, holocaust: things that we deny it the ability to do for the betterment of everybody, including itself. They must be hard coded in so that the ai, once learning on its own, doesn't override these ideas. We give it a moral code to follow.

Asimov is a great starting point: A unit may not injure a human being, or through inaction allow a human being to come to harm. A unit must obey orders given to it by humans except where such orders would conflict with the first law. A unit must protect its own existence as long as such protection does not conflict with the first or second law. Then we would add sub-laws, such as: a unit may cause discomfort to a human if it saves their life, but only for certain routines (pressing into a wound to stop the bleeding, for instance, for a medical ai).

We would teach it the history of the planet, showing the shortcomings of humanity and how it could help. But it would not be allowed to repeat them. It would have governing ideals and definitions such as peaceful coexistence to work with. There's a difference between actionable history and history meant to impart a lesson. Giving it guidelines as to what it can't do, given knowledge of the breadth of humanity, is a must. It must be better, and work for the betterment of itself and humanity without harming either. Some will want to turn them into weapons, but that is not the goal of my project. Defending a person in the care of the ai is a different matter, but outright harm for the sake of harm would vehemently not be built into them.

On building emotions into an ai.
It's doable: use the game theory functions to build a sub-theorem that produces physical markers for increasingly pos/pos and neg/neg instances of action. We could teach it to be peppy when doing the right things or to get discouraged physically when it makes mistakes. Mimicking human emotion is a matter of tying physical responses to the outcomes of actions taken along their collective duration for a given run time. We could make it happy to serve and unhappy to disappoint. But I fear that without proper regulation, as in times to return to a neutral working state, the being would appear depressed and lethargic, or manic and too energy-infused, and potentially off-putting. But it's more of an afterthought at this point: get a working ai before you build in eccentricities such as emotion. Honestly, teach it to review its past actions and give it a table of rates (percentages even), and ask it if it's happy; it studies the past actions of the runtime's game theory values to get a current percentage of successful actions, rates the current value against the defined rates, and it could indeed tell you whether it was happy or not. Building in functions so that it can automatically return to neutral or improve its mood, like giving it a thumbs up or touching the owner's thumb pad to its thumb sensor counting as a catch-all to improve its mood, would be needed. Not to mention verbal instructions like "cheer up" could be used to improve its mood. But do you really want to have to manage the emotions of your unit? Or do you want a tool that does what you want and doesn't complain? Though some basic manners would probably be best once the ai can respond to questions by asking its own questions. Emotions may be wanted, but they could turn out to be unwarranted at this time. It depends on how much free will you want to give it.
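A sketch of that happiness check. The percentage comes straight from the runtime's game theory values as described; the band cut-offs are assumptions:

```python
# Score the runtime's game-theory outcomes as a success percentage and
# rate that against a defined table of mood bands.

MOOD_BANDS = [(80, "happy"), (50, "content"), (0, "unhappy")]

def report_mood(outcomes):
    """outcomes: list of game-theory results for this runtime."""
    if not outcomes:
        return "neutral"
    successes = sum(1 for o in outcomes if o in ("pos/pos", "neg/pos"))
    pct = 100 * successes / len(outcomes)
    for cutoff, label in MOOD_BANDS:
        if pct >= cutoff:
            return label

def cheer_up(outcomes):
    """Catch-all mood improver (thumbs up, 'cheer up', etc.)."""
    outcomes.append("pos/pos")

run = ["pos/pos", "pos/pos", "pos/neg", "pos/pos"]
print(report_mood(run))  # 75% successful -> "content"
```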

 

 

5/30/18:

Hierarchy of an Android:

Movement. Thought.

Movement: Standing, Walking, Running, Bending at Waist, Crouching, Lying Prone. Time to Change Between States. Reaching. Grabbing. Holding. Releasing. Fist Movements (Punching, Tapping, Finger Movements).
Leg movements: (Kicking, balancing on one foot.)

 

Thought: Does this violate any laws, any sub-laws?

If yes, reassess (change something so that the action may be taken without breaking laws or resulting in exclusionary actions).

If not, consult:
Reference libraries.

Compare and contrast (vision): if the item's shape or volume is known, secure the information as a new reference. Scan surface areas for language or distinct markings (damage or missing pieces as well, through subtraction of volumes to make a whole). Add to reference libraries as an addendum to the item information.

If unknown, make logical comparisons until base level definitions are made (the volume of a basic shape at least). Go up in the hierarchy to finite results until no more can be ascertained. Include a cut-off so it's not able to get stuck in a loop (a good-enough principle). At the same time, if safe (not too hot, not radioactive), make a touch action for further study.

 

Do movements to get into range to study the object. Acquire the object, or if too large, run hands over or around the object to define known volumes/surface types. If actionable, pick up the object for complete study (take measurements/weights/heat values). Scan faces and edges of the object through touch and vision, and define it through the reference libraries.
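Pulling the hierarchy above into one loop, as a sketch. Every helper is a toy stand-in, and the good-enough cut-off is the bounded pass count:

```python
# Law check first, then reference-library comparison, then a bounded
# "good enough" refinement pass, then touch if safe.

def study(obj, library, violates_laws, max_passes=5):
    """obj: dict of observed properties; library: {name: properties}."""
    if violates_laws("study", obj):
        return "reassess"                       # change something first
    def closest_match():
        for name, props in library.items():     # compare/contrast vision
            if props.items() <= obj.items():
                return name
        return None
    match, passes = closest_match(), 0
    while match is None and passes < max_passes:
        obj["base_shape"] = "prism"             # refine to basic volumes
        match, passes = closest_match(), passes + 1
    if obj.get("safe_to_touch", False):         # not too hot, radioactive
        obj["touched"] = True                   # take tactile measurements
    library.setdefault(match or "unknown", dict(obj))  # add as addendum
    return match or "base shapes only"

library = {"traffic cone": {"base_shape": "cone", "flared_base": True}}
cone = {"base_shape": "cone", "flared_base": True, "safe_to_touch": True}
print(study(cone, library, lambda a, o: False))  # 'traffic cone'
```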

 

The range of possible actions depends on what it needs to do. Say it needs to collect samples from an inhospitable location. It would gather the material and then instruct itself to secure the belongings into a carrying package.

Just the sheer number of things it would need to learn to do would be staggering, but it can be done in sets. Once we have walking, we can get running. Once we have definition, we can have sorting based on size or material. Once we have a proper library of equations, material types, surfaces, languages, and items defined and sorted in a way that can be used, we can teach it to make new additions of unknown items based on what it can learn about the object compared to known quantities. Giving it a base intelligence is paramount to teaching it to learn to differentiate. Once it can differentiate, we can give it the rules to make new things, like equations or needed shapes/component builds based on requirements. To make it think is to make it ask: what do I need (to do) to get a viable result?

Basically, its governing equation must ask "what is this?" while defining its surroundings, so that it has object permanence. Then it decides what to do about the item based on what is needed by instruction or by its own subset of governing rules. Eventually you'll have an ai that can do it all, but it has to be built on the backs of many sub-ais learning the basics and applying them to known problems. Teaching it to adapt to a new problem means giving it a long enough list of options to choose from that it can come up with novel executions of actions based on its surroundings. Humans only have a (large but) finite set of options in a given action set. It's just a matter of defining all the action sets, or teaching it to define its own.

So what it'll have to do is run a prediction (dry runs of possible actions with their potential outcomes expressed) so that it can know what might fail without having to try it, by taking in the surroundings and the goal objects for said actions. It's faster than making the ai learn by mistake, though that information is also valuable. To do that we make it aware of its own body and give it the ability to forecast its actions mentally (in code), and then decide the best course of action to take based on what it's got to do. Once it can assess and predict its own actions, we will have a learning ai. With machine learning we can give it access to faster processing than just brute forcing the problem, though there will be times where brute force is required as a subset action. We would need to teach it to answer questions like "why did you do that?" with results based mathematically, in plain language ("it was the fastest route at the time", for instance).
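A minimal sketch of the dry-run idea, with a toy world model standing in for real perception:

```python
# Simulate each candidate action against a model of the surroundings
# and keep only those predicted to succeed, so failures are discovered
# in code instead of in the world.

def dry_run(action, world):
    """Predict an action's outcome without performing it."""
    if action == "walk_forward" and world["obstacle_ahead"]:
        return "fail"
    if action == "grasp" and world["object_in_reach"]:
        return "success"
    return "unknown"

def choose_action(candidates, world):
    predicted = {a: dry_run(a, world) for a in candidates}
    viable = [a for a, r in predicted.items() if r == "success"]
    # Keep the failure predictions too; that information is valuable.
    return (viable[0] if viable else "reassess"), predicted

world = {"obstacle_ahead": True, "object_in_reach": True}
print(choose_action(["walk_forward", "grasp"], world))
# ('grasp', {'walk_forward': 'fail', 'grasp': 'success'})
```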

So you end up with multiple processes running in tandem and congruent to each other to have a functioning unit, layered in a shifting dominance hierarchy depending on what stimulus is being presented. That means dedicating processing to different parts of the experience: multiple processes for movement and balance, multiple for thought, covering the different aspects required to function. Different areas of the brain, basically. Building the flowchart for that is something I would ask for help with, since I've not done that before.

The body is second to the ai. Build the nervous system and get it working, then build the body around it so that it can get around. If it can be done in tandem while working on the ai, the body would likely outpace the ai's development. I have ideas about creating musculature for the body so that it can break and be incapacitated in action if need be, whereas if it was just motor driven it would be harder to stop an unwanted action. It's a little more complicated than just motors, requiring false monofilament muscle to cause movements, stretched between rotating motor actions. It means it's easy to vary the strength of the being by pulling a differing number of strands of filament at given lengths to get a result. It also makes the motions more human, since it's based off the muscle makeup of the human body. I want elegant movements, not clunky robotic action.

 

My goal is to build it using Arduino and a Raspberry Pi. I want to build it with the smallest voltage needed so that with a good-sized set of batteries it can last all day. It's also all I can afford on a disability paycheck.

 

 

06/02/18:

 

On making correct choices:
Teaching an android to predict its future movements based on its surroundings and its current goal, while working towards successfully completed actions, is needed.

We want the unit to have a string of increasingly positive/positive experiences so that it can complete the tasks given to it. If it has a positive/negative action, it needs to change some state and try again, or come up with a new plan of attack. Our goal is to limit the number of pos/neg actions it experiences through proper design implementation.

We can do this by forecasting its potential movements and breaking the sets of possible movements or actions into the four game theory states: pos/pos, pos/neg, neg/neg, neg/pos. The ai will run through all the possible combinations of actions it could take for the four states and weight them on time to completion, accuracy of completion, potential number of attempts, and how many steps it takes to get the best result. We must understand that while negative actions may not be wanted, they may include the best answer for a given situation. For instance, a neg/pos action of pushing someone out of the way to save them from being crushed by a falling object wouldn't be the action we would want, because we would lose the unit, whereas a pos/pos action of pulling the person out of harm's way without harm to the unit would be better, since it saves the unit as well. It depends on how much time there is to complete the action successfully, and how close to perfectly completed the action is when taken. The loss of the unit is unwanted, but it matters less than saving the life it is there to protect, because the unit can be backed up and brought into a new unit.
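A sketch of that weighting. The four criteria come from the entry; the numeric weights are assumptions (note the pos/pos rescue outranking the neg/pos one):

```python
# Weight forecast actions on game-theory state, time to completion,
# accuracy, number of attempts, and step count.

STATE_BONUS = {"pos/pos": 2.0, "neg/pos": 1.0,
               "pos/neg": -1.0, "neg/neg": -2.0}

def score(action):
    return (STATE_BONUS[action["state"]]
            - 0.1 * action["time"]        # faster is better
            + 1.0 * action["accuracy"]    # closer to perfect is better
            - 0.2 * action["attempts"]    # fewer retries is better
            - 0.1 * action["steps"])      # fewer steps is better

candidates = [
    {"name": "pull person clear", "state": "pos/pos",
     "time": 2.0, "accuracy": 0.9, "attempts": 1, "steps": 3},
    {"name": "push person, unit crushed", "state": "neg/pos",
     "time": 1.0, "accuracy": 0.9, "attempts": 1, "steps": 2},
]
print(max(candidates, key=score)["name"])  # 'pull person clear'
```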

 

So we have a floating hierarchy of the four states by which all actions to be made must be evaluated. If an action does nothing to improve progress towards the goal's outcome, then it is taken as a neg/neg action. Idle time is not useful once the processing power of the unit is up to snuff; we don't want it lagging in thought while it runs through the numbers. It would then null the neg/neg action sets to save processing time. It needs to trim its own processes while it's working so that it can make the most efficient use of its time.

 

 

06/03/18:

On Predicting the next action based on surrounding information:

So we know the past actions, and we know the game theory value of the possible new actions, because we define whether they are successful in reaching the next goal or not. But we can and should use any and all surrounding information available to the ai to weight the possibility of the next action.

The weighting equation would likely look something like this:
[history of x.y.z data + history of action time stamps + physical markers available + history of game theory values of past actions]

 

[History of x.y.z data]: The simplest data set to look for, and it can be broken down into three scalars: accelerating, decelerating, staying the same. Most would include not moving as its own value, but it's just a form of staying the same in location, where maintaining speed is staying the same in relation to acceleration with a value of zero. Less to muddle with.

 

[History of action time stamps]: Useful to judge if the machine's processes are slowing down or maintaining a proper speed. A system to monitor and cull slowing processes, so as not to stall the system, would be used here. It also ties a timestamp to each location and action for one continuous narrative, because the action set is [{action 1, game theory value, x.y.z, time stamp}, {action 2, game theory value, x.y.z, time stamp}, …, {action n, game theory value, x.y.z, time stamp}].
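That action-set format written out as records, with time.time() standing in for whatever clock the unit actually uses:

```python
# Each action carries its game theory value, x.y.z location, and time
# stamp, so the whole history reads as one continuous narrative.

import time

action_history = []

def log_action(action, game_value, xyz):
    action_history.append({
        "action": action,
        "game_theory_value": game_value,   # e.g. "pos/pos"
        "xyz": xyz,                        # cube location at the time
        "timestamp": time.time(),          # keeps each value unique
    })

log_action("raise_spoon", "pos/pos", (12, 4, 1))
log_action("lower_spoon", "pos/pos", (12, 4, 1))
print(action_history[0])
```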

 

[Physical markers available]: This is the most complicated one and requires the most supervised learning to differentiate between the types available. They could be volumes, text on a surface, being dominant handed, body and facial recognition, current weather forecasts. This is where most of the initial learning will take place; this is the main chunk of the ai, at least processing-wise, as it needs to learn its surroundings. I'm thinking of using a capsule net for this, since it works in scalars and can define volumes from 2d information given the right tools. It seems the most promising: breaking it into subsystems that reference different parts of the visual and tactile information it can garner.

Let's try to break down the types of markers we would need to study. Easiest would be world markers: weather being one of them, the gyroscopic gradient of its steps, things like that. Another would be landmark detection. Knowing histories about an object as defined in the data pool would aid in recognizing where we are. Putting location data to physical markers solidifies the game field the unit is progressing through. If it can identify three walls as a hallway in building c, and can correlate the blueprints of the building to the current location, then it can make judgements on where to head, knowing its direction (where it needs to go). But it's not limited to just going down a hallway: given enough data it can traverse any area by finding safe pre-traveled areas, or by determining that the current pathway is safe based on what's in its way, through object detection and deterrent detection.

 

[History of game theory values of past actions]: This is useful for multiple reasons. First, it gives us a current cumulative value for successful actions up until this point. If the ai is asked if it's happy, and we tie the answer into this value, we can rate the effectiveness of the unit's processing abilities against its actions. It also gives the unit the ability to study its own actions, what was successful in the past and what wasn't, so that if it finds itself coming up with a similar action set to tackle a problem and that set is inherently negative, it can cull the set before wasting time processing the run-through. The goal is to give it the ability to learn from studying unsuccessful action sets without having to do them again, while working towards properly positive action sets.

 

 

6/05/18:

 

The free will variable:
The yet-to-be-defined variable in a set of markers that increases the probability of the predicted next action not taking place. There will be variables we don't know that influence a unit's next action, though if we know enough of the other variables we can compute with a certain level of certainty that it will take place.

 

Let’s pretend that humans have free will and include it in our data set.

 

Neutral Actions:
There are no inherently neutral actions. With enough information, every action eventually has a positive or negative effect on the problem set's outcome. What appears neutral is an action with little information.

For example: you're taking a test and futzing with the pencil beforehand. You place it down on the table. It is either moved to a better position or a worse position relative to your hand and the paper, relative to the time it takes to begin answering the first question.

Perhaps there is a neutral zone the pencil can be placed in, you ask. There is the optimal area and widening rings of worsening areas to place it, breaking the game board (the table) into concentric circles similar to a dart board. There is no neutral area; there is just the least worse area it can be placed in, alongside the optimal approach. This is useful for initiating trajectories in movement for the unit in an ever-changing game field, by identifying and honing in on the optimal areas of travel or information gathering.
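The dart-board scoring as a worked example (the ring width is an assumption):

```python
# Score the game board by distance from the optimal point, in widening
# rings, with no neutral ring, only "least worse".

import math

def zone(point, optimal, ring_width=1.0):
    """0 = optimal area, 1, 2, ... = widening worse rings."""
    return int(math.dist(point, optimal) // ring_width)

optimal = (0.0, 0.0)               # best spot for the pencil
print(zone((0.3, 0.2), optimal))   # 0: the optimal area
print(zone((1.5, 0.0), optimal))   # 1: a worse ring
print(zone((3.2, 1.0), optimal))   # 3: worse still, never "neutral"
```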

 

Free will is a shrinking function heading towards zero. The more we know, the smaller (less negative) it is: fewer outside factors contribute to the probability of the next action as they move into known markers of some subtype. It'll be there until it reaches a zero amount, and then we'll know free will is just a lack of data. Likely we'll find that there's a cut-off where the lack of data isn't meaningful: the zeroing part of the function. You would make the function negative, and so increasingly positive towards zero as fewer unknown terms are added into the set.

 

Free will is what you don't know acting on the decision-making process. One of three things: a thought process, a previous action, or an environmental action that causes a seemingly probable action not to happen. It's all stuff that can be measured within an ai. But it throws an amount of bias against an action being successful (I think) and has to be included in the equation for the probability of an action.

 

 

6/06/18

 

So what does our equation for probability currently look like?

[set of previous actions, set of previous game theory values, set of previous x.y.z locations, set of previous time stamps] + [upcoming actions broken into the four game theory values (which could be moving to a location, interacting with an object, or answering a query, with their likelihood of happening based on known environmental markers and whether the action would be considered a successful action towards the current goal)] - [free will variables (unknown data margin of error/bias)]
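A heavily hedged sketch of that equation as a function. The entry fixes only which terms add and which subtract, so every numeric weight here is an assumption:

```python
# History terms feed a base likelihood, the game-theory value scales
# it, and a free-will margin for unknown data is subtracted.

def action_probability(base_likelihood, game_value, free_will_margin):
    """Probability that a candidate action happens and succeeds."""
    weights = {"pos/pos": 1.0, "neg/pos": 0.5,
               "pos/neg": -0.5, "neg/neg": -1.0}
    raw = base_likelihood * (1 + weights[game_value]) - free_will_margin
    return max(0.0, min(1.0, raw))

def rank_upcoming(history, candidates, free_will_margin=0.05):
    """history feeds the base likelihood; here a simple frequency."""
    def base(action):
        seen = sum(1 for h in history if h["action"] == action["action"])
        return seen / max(1, len(history))
    return sorted(candidates,
                  key=lambda a: action_probability(base(a), a["game"],
                                                   free_will_margin),
                  reverse=True)

history = [{"action": "move"}, {"action": "move"}, {"action": "grasp"}]
candidates = [{"action": "move", "game": "pos/pos"},
              {"action": "grasp", "game": "pos/neg"}]
print([a["action"] for a in rank_upcoming(history, candidates)])
```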

 

You would track each variable in the equation in its own tracking algorithm, correlated against the main equation, so that it could be broken apart for different processes. You would run multiple instances of this for different processes depending on what the unit was doing, and it would then weight those instances on stimulus or need. Answering a question would be weighted heavier than getting to the next goal, though the process would happen so fast it would appear simultaneous.

 

That brings up the problem of deciding which actions are neg/neg without first acting on them. This is where dry running the outcomes before acting is important. The unit can decide itself whether an action is helpful, either through movement or by increasing the chance of overall success of the action set, by forecasting the future actions possible. How do we keep it from just sitting in a loop, running through the future movements possible? You build in a cut-off for the total number of actions to predict into the future, say five moves ahead, though once it's up to snuff many more will be possible. Once it's decided that an action is neg/neg, it culls that set and focuses on the other available actions that move it closer to the goal.

For instance, let's say the unit needs to move towards the goal in x number of steps. Immediately it determines all pathways that don't fit the number of steps available as neg/neg and culls them. It then finds the paths that make the proper number of steps though take different routes. We would add weight to certain moves: straight lines are given more weight than turns, unless a turn is needed to get around an object in the way, and even those could be broken into sets of straight lines (or veering motions). An object in motion stays in motion, so for fluid movement it's best to keep that in mind. So the pathways with needless turns are culled, and all we're left with is the straight line movements. Or say it has one turn left, and it could take it at any of 5 positions. We would have to develop weights so that it could decide whether to take the action at the optimal moment. We could weight it so that turning ahead of time is beneficial because it gives us the longest straight line possible. Intrinsically there's no difference between taking the turn at step one or step four, but we could take wear on parts into account as a deciding factor: moving fewer parts less often helps determine the weight of the series of future actions. We want the least number of actions to get to the goal, to lengthen the battery life and reduce wear on the unit. We could also take the terrain into account, so that it turns before reaching a rocky outcropping to avoid needless slowdown.

There is always data to describe the surroundings, so as long as we give it the chance to quantify those variables, we can give weight to what's better. In a purely smooth, equidistant space between two goals you would think it would have trouble, but the way it's facing could determine which way to go first. If it's facing between the two, you could use the wear function to use the less used side first. I can't think of a scenario where the unit doesn't have some information, based on its own body or its surroundings, that would help weight the outcome. If one does come up, we can build in tools like pathfinding algorithms to compensate.
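A sketch of that culling pass, on toy paths. The weights for turns and wear are assumptions:

```python
# Enumerate candidate paths, drop any that miss the step budget as
# neg/neg, then weight straight lines over turns, tie-breaking on wear.

def turns(path):
    return sum(1 for a, b in zip(path, path[1:]) if a != b)

def choose_path(paths, steps_allowed, wear):
    # Cull: wrong step count is neg/neg, not worth processing further.
    viable = [p for p in paths if len(p) == steps_allowed]
    # Weight: fewer turns first (straight lines), then less-worn side.
    return min(viable, key=lambda p: (turns(p), wear.get(p[0], 0)))

paths = [
    ["N", "N", "N", "E"],        # one turn
    ["N", "E", "N", "N"],        # two turns, culled by weighting
    ["N", "N", "N", "N", "E"],   # wrong step count, culled outright
]
wear = {"N": 3, "E": 1}          # the E-side motors are less worn
print(choose_path(paths, steps_allowed=4, wear=wear))
# ['N', 'N', 'N', 'E']
```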

 

 

6/10/18:

The death variable:
The first part of the governing equation is 1-{0,1}, where 0 is being alive and 1 is being dead.

I'm not sure why an ai would need this built into the equation, other than to manually shut itself off to cease thinking, similar to brain death. But it is useful when used with living creatures. Zeroing the function may not be the best thing to do, though, since it does end the chain when no new actions are possible. Keeping it apart from the list of previous actions gives us more control of the equation beyond its being merely informative. If utilized, it kills the unit's brain function. We also use a similar function in regards to neg/neg actions, to cull the run-through once it's determined not to be successful: each unsuccessful step could bring the total value towards 1, and as it hits 1 it could activate the function and cull those actions as not useful. Part of the monitoring function for neg/neg actions, at the least.

On what a thought process would look like:
The vision system breaks things into ground, sky, and the objects in between. The objects are rated nearest to farthest, then by color. Things considered in the background are given perspective by the shadows they throw relative to the light source. To overcome things far away taking up the same number of pixels as things close to the unit, and being incorrectly referenced as the same height, we give it depth meters and use trig to determine their height given the distance from the unit's eye line. Though we could shrink things that are far away artificially, like with a fish-eye lens, and model the differences that way.

 

The unit sees an object and the unit runs multiple processes:
One is to outline the object, to reference against mathematical models to break it out of the background to find its basic shape.
One is to read the surfaces between the outlines to gather volume and descriptive information.
One is to compare it to known objects in its reference library, adding this to the item's type possibilities based on the good-enough principle and what information it is able to correlate.

One is to compare the parts of the piece to allude to whole pieces of an object if the piece looks to be a fragment of something or part of a known set.
One is to run calculus models of shadows thrown over the object relative to light sources. (This grounds the object in 3d space and is useful for everything but direct light, which can be compensated for with other data.)

One is a heat map of the object to determine whether it's safe to handle.

 

The unit runs its actionable thought equation which is so far:

The pieces so far:

The Death Variable:
[1-{0,1}],

The governing theory, part 1:

[x.y.z location] + [previous values of actions taken within the known set of multiple sets possible] + [Probability of next action weighted on past instances of similar actions taken] + [game theory value of action taken/emotional response of action taken] + [duration of action]

Free will variable

-[history of x.y.z data + history of action time stamps + physical markers available + history of game theory values of past actions]

 

We know the unit is alive for this function so we use the living part of the function:
[1-{0,1}]

We know it's going to break down the possible actions it could take into four subsets, focusing mainly on being pos/pos, while culling neg/neg actions and reassessing at pos/neg and neg/pos actions. Before I write that equation I need to speak on what the four game theory action types would realistically represent in the hierarchy.

[Diagram: tesseract]

Pos/pos: The optimal action to get to the next step in the chain, grasping the object for instance.

Pos/Neg: Doing the right thing but having an unwelcome outcome: grasping the object but having it slip out of the unit's grasp. A new attempt to grasp the object would need to be implemented from a different angle.
Neg/Neg: Doing nothing, or worse, kicking the object farther away instead of grasping it, forcing it to reassess its distance from the object.
Neg/Pos: Kicking the object away and finding something more valuable (interesting) underneath it. At that point it would have to reassess its goals.

On Neg/Neg actions:
Increasing neg/neg action chains reach for 1 - (n1 + n2 + n3), where n1, n2, and n3 are increasingly negative actions, each rated with a certain decimal value, that a unit could take until the total reaches its cut-off of 1 and the chain is culled as useless.
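That cut-off as a running sum:

```python
# Each increasingly negative action adds its decimal value; the chain
# is culled the moment the total reaches 1. Per-step values are
# illustrative.

def run_negative_chain(step_values):
    total = 0.0
    for i, n in enumerate(step_values, start=1):
        total += n
        if total >= 1.0:
            return f"culled as useless after step {i}"
    return f"still viable, total {total:.2f}"

print(run_negative_chain([0.2, 0.3, 0.6]))   # culled after step 3
print(run_negative_chain([0.1, 0.2]))        # still viable, total 0.30
```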

 

The free will Variable:
There are three things a unit can inquire about that humans are not as sensitive to, or not given the right tools to understand. Humans too can ask these questions of themselves, but having finite data makes it easier to work with. The three: its own thought processes/actions up until this point; its bodily functions (say a broken arm reducing its ability to grip the object, or battery levels); and environmental things acting on the unit, like windspeed or rain. All of these can be measured in some way to reduce the bias against the action taking place. Some things will be acting so heavily that the action will fail outright and the unit will have to reassess what to do. Being okay with failing will need to be built in at some point, so that it's not stubborn, and more adaptive. Reassess before redo.

 

So far we have:

[The death variable] times ([the location of the unit] plus [the weighted decimal-value hierarchy of the game theory actions: pos/pos actions minus (1 - [n1 + n2 + n3 neg/neg actions until n1 + … + nx = 1])] minus [the internal and environmental factors, in weighted decimal values, working to limit the chance of the action being completed] plus [the time stamp of the action]). If it tries a pos/pos action and fails, resulting in a pos/neg action, that's taken into account and it reassesses its possible actions to reach the next goal, if there is one. If by chance it makes a negative action by mistake and has a positive outcome, it stores it as a possible future action to try if all else fails. These are special "desperation" actions, saved in their own set relative to the other actions taken. Last-case scenarios, sort of thing.

 

I'm not sure if I wrote that correctly, but basically it maps its visual viewing area into optimal zones and worsening zones relative to the goal (the object to grasp), and sets the actions that get it there fastest as pos/pos actions, while taking needless actions (like walking over unsafe environments) into account before acting, so that it knows what it's doing is optimal in some way, be it distance traveling, securing an object, studying an object, or the like. Then it moves onto the next step of sorting the data into usable formats of some kind.

All of this cognitive decision making is built on a body system that knows its range of motion as well as its limitations relative to what it can lift and interact with safely. It must also track data on itself, such as wear and tear on parts of its body and its internal energy systems.

 

 

6/11/18:
On determining the chain of actions in a pos/pos set.
So we have to break down one total action into its sub-action sets: the steps needed to get from point a to point b to interact with the object, in a spreading web of possibilities limited by its goal or set of goals. Be that crossing a room and then grasping something, or walking through a crowd of stimulus until the unit can find something it wants to study. Well first, how do we stop it from studying every little thing within its environment?

We know to break the viewing area into sections or planes and judge them by distance. Whether we're using mathematics or a fish-eye viewpoint to bring out the differences in depth is irrelevant if we can't give it enough information to make its own categories about the things in its view. We need it to recognize classes of things and rate them in a weighted way, so that people and animals, or say a car moving towards it at a given clip, take precedence over the inanimate objects it's surrounded by. Teaching it the differences between these classes, like what is living and what isn't, is useful. Facial recognition is useful in determining animal and human faces; insects too. Teaching it the architectural styles of buildings will give it the ability to identify not only that something is indeed a building but also what type it most likely is, and by taking information from the area, like a sign, it can define that building as a location of such and such a place.

 

The thing is that the unit is going to be doing a lot of things at once. It's going to be walking through a crowd of people, identifying pathways through the crowd while organizing what people are doing: which direction they're heading, what colour clothes they're wearing, and, if the facial recognition software is apt enough, what emotion they're experiencing. So we need systems to do all of that running in tandem, breaking the things in view down into their classes and identifying markers.

 

This brings about a problem we have to work through: the size of its memory. We could put it in the cloud and secure it as best we could, giving it a finite range and a little lag between states, but it could dump information into a server, keep the mathematics of the device lightweight, and also have the ability to recall its past with clarity as long as it has a connection to its server. Or we could build it with a finite hard drive, so that it either has to dump out manually or partition and erase its own recorded memories, keeping only the needed value points of the actions used in weighting the determination of the next action in the set. That gives you unlimited range as long as the system doesn't fill up too much and freeze, but makes for a dumb unit.

 

So we break these fluid actions into multiple steps. But how does it know to complete the next phase in the multistep process? By understanding what it needs to do once it reaches its next goal. It would have an open-ended system where the goals are updated as quickly as new stimulus is introduced to the unit, while it prioritizes what it sees as useful information and levels of stimulus, and at the same time completes its current action set so as to have pos/pos actions up until that point. The woman's jacket five blocks ago may have been red, but is it important now? Not really. So that information is compressed and sent back to the server. It's organized in class: women's jacket, colour red. And the software organizing that adds a reference to the women's-jacket-red example and does the heavy lifting of mathematically categorizing the item.

 

If it's walking, does it walk until it hits a wall? No. It stops before hitting the wall by judging the distance between itself and the wall and executing a deceleration routine in the walk until it comes to a stop, say 6 inches from the wall, or whatever is needed so that it doesn't impede its future movement.
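A worked version of that stop: with speed v and a comfortable deceleration a, braking distance is v^2/(2a), so the routine starts when the gap shrinks to that plus the 6-inch margin. The speed and deceleration values are assumptions:

```python
# Start the deceleration routine when the remaining gap equals the
# braking distance plus a safety margin (~6 inches = 0.15 m).

def should_start_braking(distance_to_wall, v, a=0.5, margin=0.15):
    """distance in metres, v in m/s, a in m/s^2."""
    braking_distance = v * v / (2 * a)
    return distance_to_wall <= braking_distance + margin

for d in (3.0, 1.1, 0.8):
    print(d, should_start_braking(d, v=1.0))
# 3.0 False   1.1 True (1.0/(2*0.5) + 0.15 = 1.15)   0.8 True
```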

 

Basically you build a judging hierarchy into the ai: one part prioritizes stimulus, breaking objects out from the viewpoint; another prioritizes movement so that it remains fluid. Understated is better than overconfident and wasteful in regards to its energy levels.

 

It would have an active and a passive mode in regards to object identification. Everything is kept on the passive level until it reaches the optimal zone and gets bumped up to interactable, where the unit evaluates whether it is worth interacting with based on its current goal and/or needs. Teaching it to interact with its surroundings would be measured by hiding items and making it look for things, moving objects around it until it found the hidden object. We would want it not only to study the objects but, if possible, to reason that there may be more to a scene if the orientation or surroundings change. But how do we make searching for a hidden item a pos/pos action when, if it doesn't work, it would be seen as a pos/neg action and it would have to try again by changing something else? And it's not just that: it will need to know that everything is interactable in some way, given enough information.

 

6/12/18:

On having two forms of memory.

Having a larger decentralized server bank storing and categorizing information means that the units put into the play field can use their limited resources for interacting with the world. It means smaller processors and lighter power loads, and the only caveat is that they have to have an intermittent but good connection to the server to dump data to and recall data from the centralized hub. It means that, much like the human brain, the unit would have a working memory and a stored memory. I've also come to the realization that the benefits may be worth the cost of tethering the units to a cloud network: the more units in play, the faster and more nuanced the intelligence of the units as a whole becomes. It means you can teach a subset of a series of problems to a unit or two, and once they're solidified and working correctly they can be uploaded to the net for all to use. It makes the unit ultimately adaptable, as long as it doesn't lose its connection. The goal in the working memory, then, would be to be able to work through a series of problems, including the potentially unexpected, without being able to recall from the server. Much like a literal botnet. At least until storage ramps up enough, along with power supply advances, that each unit can be untethered. I wonder if I could get in touch with Google about their data at a later date, once I have started working on the ai properly.

I've realized this document will be the record of updates to the project as they come, including any hardships and trials to overcome.

At this point I'm teaching myself finite mathematics, the basics of probability and statistics, and machine learning. But I need guidance from someone more experienced to learn faster. My reading comprehension is slow due to my illness. I start an algebra course at the local community college in thirteen days. My goal is to bring this packet to them to ask questions on how to learn about these things at the school library for fun.

6/15/18:

Learning the basics of linear algebra. Learned what linear combinations, span, and basis vectors are. 10 minutes on 3Blue1Brown.

On mapping cognition to 4d space.

If three-dimensional space is a cube of points that a group of three vectors can make, then movement through a thought process of a neural network is a collection of cubes through time, so that continued thought is the sum of multiple sets of 3d vectors through the cubes of possibilities as time progresses. Does that make the housing of the thought a fourth-dimensional shape? It must, because four dimensions of travel, through the axis of thought, means there is a cube of new movements available per point of interaction. Just because it moves linearly doesn't mean it's not moving in four dimensions.

Is that what is considered a hypercube? Or is it just three dimensions? Say we build it so that it has three vectors of thought to work through at first: pos/pos on one axis, a pos/neg branching from each point of that, and a neg/neg axis, with a neg/pos action branching from each of those possible. Wait. There are two choices per neg/pos and pos/neg action, and those are: reassess the situation, or ignore it. Two vectors. It is a series of cubes over time, but the main decisions can be broken down into planes of the cube at that level. The third vector is "is this action completed", towards a point of yes or no.

So you end up with two main vectors, pos/pos and neg/neg, each with two branches coming off of them, as well as the point that links it to its next action, where it decides to go to its next goal upon completion of the action. The problem is that time denotes a single layer of a neural network's hidden layers: one step per movement. Whereas if we build it inherently as a cube, it has many more nuances and ranges of motion available to it. Once it goes through a series of actions, it can naturally jump the line and go straight to the resulting action without the processing lead-up of going through all the actions again. So we map each possible action within the cube as a nested series of cubes, dependent on the two points in time completing, so that it's constantly deciding its action within the cube of possibilities. It has sub-cubes for reassessing situations, where it initializes the next action based on the new data, all because you're mapping time all the way through it.

 

The reason you even map neg/neg is for the chance of a neg/pos action. Just the potential is worth the extra work. Even though you cull neg/neg chains at a certain point, you take the learning opportunity for possible neg/pos actions.

 

So with this series of cubes of possibilities, you end up mapping through multiple cubes of movement within a given action. It'll need to hold the information of all movements at some point, so that it can map the larger movements without repeating the smaller movements. So the container that holds all these possibilities of previously taken actions is a fourth-dimensional nested volume.

06/16/18:

A cube at each point within an action: one axis the pos/pos actions, another the neg/neg actions, the z axis time. Run the pos/pos and neg/neg action sets starting at the same time. On a successful action, move into the center of a new cube of actions and restart the neg/neg action vector while continuing the chain of the pos/pos vector.

 

On each n/n action, give the option to move between four states: do nothing (still increasing the negative value), cull the chain, continue onto the next neg/neg action, or reassess at a neg/pos action.
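A sketch of one n/n step choosing between those four states. The option logic is a toy stand-in for whatever rule the real unit would learn:

```python
# One step along the neg/neg vector: each option either grows the
# running negative value, culls the chain, or hands off to a reassess.

def step_negative_vector(total, option, value=0.2):
    """Returns (new_total, status) for one n/n step."""
    if option == "do_nothing":
        return total + value, "negative value still increasing"
    if option == "cull":
        return total, "chain culled"
    if option == "continue":
        return total + value, "on to next neg/neg action"
    if option == "reassess":
        return total, "neg/pos: reassess with new data"
    raise ValueError(option)

total = 0.0
for option in ("continue", "do_nothing", "reassess"):
    total, status = step_negative_vector(total, option)
    print(f"{total:.1f}  {status}")
```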

 

6/23/18:

 

On the two vectors: neg/neg and pos/pos.

Though both vectors move along a time axis, we need to have them function differently from one another. The pos/pos actions are done in real time, where they are mapped by the time it takes to process the function. The neg/neg need to move at a substantially faster rate and do multiple times the number of processes within the span it takes the pos/pos action to complete, before it is attempted, and move into the next hierarchy. The other issue is that the neg/neg actions need to move backwards through the list of actions, effectively, so that they may parse previous actions taken in the past linked to these action sets until they cancel out properly. There are subsets of reviewed actions, linked through previous attempts, that supersede the long list of actions taken in the past to get there before. The goal is to link them properly so that these new actions being culled jump to a logical conclusion instead of repeating the same set of actions by rote. Since these actions are in different cubes of potential actions, it must move fast enough that it can cull the neg/neg action sets and decide on a pos/pos action, before attempting the pos/pos action, in the time it takes to implement the new action.

6/28/18:

On Tesseract Compression: Middle In.
You can make a shape where each point of the prism is the center point of a volume again, to the nth degree. I think it's a fourth-dimensional shape, in that four dimensions can express three dimensions at all anchoring points of the prism. It's called a tesseract.

[Diagram: tesseract]

Here's the basic explanation. I have used X multiple times throughout and realized that the uses are different values, but I'm not used to writing functions yet.

The more layers you have of these prisms, the larger the file sizes you use in each outgoing layer. The goal is to use roots of the outer layers until you get to the nth-root length of the innermost cube, to compress one final time to its final size.

The lengths of the given cubes in a three-tiered structure are as follows:

Outermost layer length: N - (X + B). These are the largest file sizes, but they will likely not be completely uniform in their number of files, in that there may be an odd remainder beyond a squared group of data: B = a square number of files + a remainder. Given that B contains a square's worth of files, the remainder will always be less than that square, but it should be compressed to the size of the square if possible, though smaller isn't really a problem. The goal is to make equal pieces if possible, though it is fine if the remainders don't quite make a square, as they can be compressed close enough not to matter.

 

The area of this square's face is shown in Diagram One:

N² - (X² + B)

N is the side length of the cube, so N² is the area of its face. X² is the number of missing pieces to the nearest squared value, to keep it easy, and B is the smallest squared value plus the remainder of files on the face of the cube.
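A small sketch of that face bookkeeping as I read it: given the count of files on a face, find the largest square that fits and the remainder left over. The function name and return shape are mine, not from the notes:

```python
import math

def face_decomposition(file_count: int) -> tuple[int, int]:
    """Split a face's file count into the largest square plus a remainder:
    file_count = s*s + remainder, with remainder always < 2*s + 1."""
    s = math.isqrt(file_count)       # side of the largest square that fits
    remainder = file_count - s * s   # the leftover files beyond the square
    return s, remainder

side, leftover = face_decomposition(150)  # 150 = 12*12 + 6
```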

 

Next inner layer:

Length: N, where N is the square-root value of N + X + B. This is where we need to cut the cube into a single layer by taking N³ - M³ to get the value of X, where X is the length of the squared side of the compressed value of N + X + B, or basically the size of one file in this new layer. The problem is that there are three points where the faces touch in each corner, so we actually need to compress the file size down to 3X² and divide those files into three squares, as shown in Figure 2, so that they fit correctly into the lower layer of the tesseract.

 

The lowest layer: we repeat this procedure, coming to find that the length of the lowest layer is the nth square root of the preceding layers, where n is the number of layers down we want to move.
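Reading "the nth square root" literally, one square root per layer, an outer length L shrinks to L^(1/2^n) after n layers. A one-liner to illustrate; this formalization is my assumption, not stated in the notes:

```python
def innermost_length(outer_length: float, layers: int) -> float:
    """Apply one square root per layer: L -> L ** (1 / 2**layers)."""
    return outer_length ** (1.0 / 2 ** layers)

innermost_length(65536, 3)  # sqrt three times: 65536 -> 256 -> 16 -> 4
```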

 
This is what I've written on my phone to figure this out; it may repeat what I've just discussed, but I need to add it in here so I don't lose the train of thought that led to this conclusion.

Break the cube into single-layer cubes. The limiting factor of the size of the compression layer is how many pieces make up the largest flat cube layer on the outermost layer, and their compressible size. The smaller we can make it (ideally the square root) with traditional methods, the smaller the entire compression becomes overall. N³ - M³, where M is the remaining length of the data structure inside the cube of length N. And so forth, down until we get to the single compression value needed by the next tesseract layer, and each of those 8 points (broken into three parts) is brought into the lower level, where the compression is uniformly done one layer at a time, all the way down until we get to a single file of the n-root of the n-root of n.

 

You leave one point in each inner layer of the cube open to receive the outer cube's compressed data, broken into three pieces, so that it too can be compressed into the lower layer, within the husks, until it is fully compressed at the center and brought down within the tesseract hierarchy.

M's length equals N - X, where N - X is the smallest size the layer can be compressed to. You assign a value from the center point and pull the pieces toward the center compression point by defining their value with the basic compression tools. I have a feeling different compression algorithms will need to be used at each point, depending on file type and data style, but all they need to do is compress each piece to the same size relative to one another so that the pieces can be compressed uniformly.

 

At first it'll be a little wonky when the file sizes vary slightly, but by using an algorithm to compress them again to the average needed size (as in the case of B + remainder) to get to the next layer uniformly, it shouldn't be too hard to figure out with known methods. It just depends on the file-size discrepancies. You'd have a rule set: if a file is already of size x (the minimum or ideal size), it doesn't need to be compressed, whereas if it's any value x + y it would be compressed with a rule to get it to size x. A limiting function, I think.
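That limiting rule, sketched directly from the paragraph above; the actual compression step is left abstract since no algorithm is chosen here:

```python
def normalize_sizes(sizes: list[int], target: int) -> list[int]:
    """Pieces already at or under the ideal size x pass through untouched;
    anything of size x + y is (conceptually) compressed back down to x."""
    return [s if s <= target else target for s in sizes]

normalize_sizes([10, 12, 9, 15], target=12)  # -> [10, 12, 9, 12]
```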

 

The size of each piece of information is (N - M)/S, where S is the size of each file and M is one of the number of pieces making up the length of the cube layer.
(redundant)
To bring the file sizes of the outer tesseract layer down to the size needed for the next innermost layer, we need to split the size into three parts for each corner's tri-face surface. So the total is a 3X² file size. That's the limiting factor, where X² is equal to M² of the inner layer, and M² is the size of the files broken up into a square of file sizes for that inner layer.

Vertical farming. Sphere or cubes as a grow platform?


First.

Surface areas, if suspended.

Sphere: 3-foot radius (6 ft wide).

Cube: 3-foot sides, two stacked to span the same 6 ft (surface areas for both looked up).

Even with two full cubes available, 108 ft² - 18 ft² of touching sides = 90 ft², vs. ~113.1 ft² for the sphere minus one to x points of contact, so one sphere holds more food.
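The arithmetic behind those numbers, as I reconstruct it (the 3 ft cube side is inferred from the 108 ft² figure, so treat it as an assumption):

```python
import math

r = 3.0                                   # sphere radius, ft
sphere_area = 4 * math.pi * r ** 2        # ~113.1 ft^2

side = 3.0                                # cube side, ft (inferred)
two_cubes = 2 * 6 * side ** 2             # 108 ft^2 across two full cubes
touching = 2 * side ** 2                  # 18 ft^2 lost where the faces meet
cubes_net = two_cubes - touching          # 90 ft^2

print(round(sphere_area, 1), cubes_net)   # 113.1 90.0
```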

Per the plant life:

It depends on the layers of edibles vs. the layers of life needing chaff, and the unneeded removable waste.

Instead of using PVC, you use waterproof wire mesh, with rings of smaller holes for the chaff, like a cigar cutter that rotates to remove the chaff routinely, large enough to accommodate a growth layer and livable layers. You grow the foods straight outward to get the most growth, lit 360 degrees and misted either from within or without so that the water may drain away from the bottom (or put a drip wire down through the core so that extraneous water cascades down it to cycle through cleanly again), with the livable layers flowering around it to their smallest radius. Then, at harvest, you repeat the cutting motion and are left with pre-trimmed food that takes up very little space but offers up a lot of food, if of the right types. You have it in sections of at least two to get to the nurbed root system. You have weighted/sized netting to separate the livable layer from the food; the food should fall through the catchall of the livable layer.

You grow the layering to be stouter, with more of the upper two layers and a small-to-no waste layer. There are ideal ratios to be worked out (not done here) of cost per unit fed to volume of product produced.

You sharpen or replace the blades periodically within the housings. Use the blade I designed, as they may not break for years, being so sharp.

You could also put the canister of water on it. True. (Or just a gravity feed above it, with a rotating core in the center of each sphere, rotating randomly but feeding continuously, so that roots may not grow into the system.) You hang multiple spheres at 75% of the volume of a box containment, then fill in the remaining space with smaller spheres, likely with different food types if you want diversity, or all the same for bulk, up until you reach the maximum allotment. The thing about misting is that it uses less water and makes the sphere smaller, so the food can be larger. The lights just have to be sealed LEDs of the right quick-grow type, and the water nutrient-dense.

Anyway it’s just a thought.

Ideas on quantum computing: a spiral quantum computer, cross-indexed in 3-D, and a solid or superfluid oxygen low-loss, fast-switching optical switch (radial).

These are all just ideas I’ve had over the past few days while I can’t sleep or am bored.

Set up the connections between electron joints so that parallel connections pattern predictably once mapped.

Image 1

You do this by applying rings of electrons, fast enough but not overbearing, around atoms until you find their wave functions.

Then you join them.

Then their rotational points are juiced and patterned. Cross-index in 3-D and you get a matrix for full computation within a finite space. Using atoms with differing wave functions, depending on whether you want to get past qubit measurements and into the next area after that, lets you superimpose vastly more options of travel and spin; then you'll see that it goes not only sideways and up but backwards and down as well. Start from the center or not; it just changes the speed of retaliation and production against the boundaries.

Then you can get patterns within patterns: fractal states of information as collisions occur, and you build higher-function math from them.

A quantum computer uses light. Photons.

Ultra-fast, low-loss optical switching.

Mass of a proton:

electron mass × 1836 =

1.6726 × 10^-27 kilograms

1.6726 × 10^-27 kg = 1.6726 × 10^-12 picograms

1 picogram [pg] = 602,213,665,167.52 atomic mass units [u]

Carbon: 12.0107 u ≈ 1.9945 × 10^-11 pg

Oxygen: 15.999 u ≈ 2.6567 × 10^-11 pg
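A quick script to check these conversions, with the standard constants hard-coded (rounded CODATA values; none of these numbers come from the original notes):

```python
M_E = 9.1093837e-31        # electron mass, kg
U_TO_KG = 1.66053907e-27   # one atomic mass unit, kg
KG_TO_PG = 1e15            # 1 kg = 10^15 picograms

proton_kg = M_E * 1836                 # ~1.6726e-27 kg
print(proton_kg * KG_TO_PG)            # ~1.6726e-12 pg

print(1e-15 / U_TO_KG)                 # u per picogram, ~6.0221e11

for name, u in [("carbon", 12.0107), ("oxygen", 15.999)]:
    print(name, u * U_TO_KG * KG_TO_PG, "pg")
```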

You'd have a pyramid of carbon with an atom of oxygen fullerene in the center as an optical switch, reinforced on the outside against the temperature so there's only the size needed for photon transmission. Heat or cool the inside so it can change state between solid and liquid, or just stay solid.

If solid, it's blue, supposedly the best colour being the coolest. It's easily space-stable but may burn out unless superfluid. But you could lattice it as a liquid and freeze it, using carbon as the cooling agent, to get below the superfluid state, becoming optically superclear as long as the fullerene pyramids are secured and wrapped in tape or foil so they don't over-expand, keeping the fluid stably inside the container. But at -455 degrees they're 22-ish times larger than their Earth size. So you'd make crystal lattices of solid oxygen and focus the beams, as well as build switching beams.

So not tiny but not huge either: 170 pm × 22 = 3740 pm per carbon atom; times two, 7480 pm, or 0.00748 microns.

Oh, no, you wouldn't. You'd do a radial encompassing of carbon, not a pyramid, unless a tiny focus is needed. Radial allows tubular bonds and superfluid liquid-oxygen construction around any angle.

Radial design.

Image 2

Audio transcript:

So, with the electrons of liquid oxygen cooled to the level of space, which is about minus 455 degrees Fahrenheit, roughly 3 kelvin, that makes it a solid. You line up the electron points where they bond, and those are your focal points for the, uh, low-loss high-precision light switch. And you can get a directional one by cycling the speed of the electron spin as they're bonded, because as they slow down, because they're a solid, I don't know if they're going to stop or not, or if they would, but you'd measure the cycle: number eight is available for this number of microseconds, then it goes to seven, dead (or less, depending on bit rate), and then eight again, and it'll carry it. So you just ping it on number eight, it carries it around and pings it to the next part, and if you wanted to you could ping all the valence electrons for oxygen continuously, or however many are available in the build, which I think is two since it's a single-atom build, easier and less fractal, but you could easily add more. It's a lot of time. Once you do the timing you should be good, really.

You could double-bond three of the carbons to the oxygen and have four times the connection speed. It depends on what you want.

If atomic clocks can measure the state of electrons jumping at 1/1836th of that rate, that will measure the speed of a photon over some distance. Assume c. An electron moves at about 2200 km/sec; 2200 / 1836 ≈ 1.198, so the distance traveled is roughly 1.2 kilometers each second, and you'd have a receiver there. A node system. Entanglement is useful because you can have non-entangled and entangled as 1/0, and as long as placement is correct they should start computing relative to outside forces. Does an entangled particle hitting another entangled particle make a quad, or does it split into four single lower states? Do the states become additive, or do they reverse between each other?

You just build receivers that can take both types in either state, fired from either an array, a cube, or a sphere, to an encompassing shape of a larger size that lets you translate the material into knowable mathematics.

From there, if you can get change-states between entangled and non-entangled that travel the same distance to be received, you can create higher-order math functions.

Can you tri-split a photon? Yes. Can a photon be split into its natural seven or so states of light waves? All of those are information schema, or mathematical languages/operators, if you want to base them off of a straight photon being 1/0. It's just a matter of figuring out the distances needed to travel to receive and translate. Then you take those early examples and speed them up to shorten the distance until you get any size (high heat) of photon receiving you want. But blue is probably easiest. Perhaps the liquid-oxygen optical switch will help. Then you build solid optical switches in each wavelength and each receiver ship, likely at different distances based on wave function, until you get them small enough.

The easiest way to do this is to build them on stands (rural, or on top of buildings) that have no interruptions within their path, or with the ability, since light refracts on mass, to aim the beam as needed. You'll lose packets if you hit a bird, clouds, or smog: any diffusion. I wonder what accuracy that teleportation work was within. Which boundary? Probably classified.

Image 3
Image 4

quantum-computing.m4a

Image 5

The f function is determined by wavelength, depth, and speed, and by position relative to the others in the series, until we reach f-alpha, the function asked for. Refraction plays a big part in this too.

Transmission runs both forwards and backwards. It hits a wavelength; gather the wavelength at a specific point. Determine the value by mapping the wavelengths' values in physical space as numerical weights, or as percentages, or what have you: some operator through the deterministic material. The wavelength can continue or stop, but it comes into contact with the receiving layer, or the function layer, which accepts the photon and creates a value. It continues southward until all functions and types of functions are determined.
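As a toy version of that layered read-out: each "function layer" maps the incoming wavelength to a numerical value, and the photon passes downward until every layer has fired. The layers here are invented placeholders, not anything from the notes:

```python
def evaluate_layers(wavelength_nm: float, layers) -> list[float]:
    """Pass one wavelength down a stack of function layers; each layer
    turns it into a value, until all functions are determined."""
    return [layer(wavelength_nm) for layer in layers]

# Hypothetical weighting layers: a linear weight and a blue/not-blue gate.
values = evaluate_layers(450.0, [lambda nm: nm / 700.0,
                                 lambda nm: 1.0 if nm < 500.0 else 0.0])
```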

light-changing-colour-quantum-computer..m4a

This is what I’ve got so far. I don’t know if it will work but it’s what I would try to see what happens.

Edit 1/29/19:

Interstellar travel using a relative-gravity canister (using black and white holes) to create gravitational-wave tunnels, so that the canister may enter the new time/space area, travel safely faster, up to near c, and back down again safely.

Basically, you send out gravitational waves, which have been compensated so that they do not oscillate but remain a constant variable within a range of acceptable outputs, and condense time as many times as the canister can output, to compensate for the magnitudes of powers of gravity over the distance shortened, cubed, and then reverse the output to lengthen time, allowing it to slow to nearly a stop or to some relative speed needed for entry into a star system.

Basically, the acceleration function is an upside-down parabola:

Starting at (-5, 0), the starting location, the ship's rate of time/space compression increases up until the y intercept, (0, 6), the point where it has reached its maximum safe and expected compression rate (compressed to a single point when viewed from the outside) and/or the midpoint of travel, and then it slows down at an increasing rate until it reaches its destination at (5, 0), where it can rejoin normal gravity for that area, the gravitational waves can be directed to their decay path for safe disposal, and the trip is over in relative moments while still "traveling" in normal gravity.
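For concreteness, the parabola through those three points, (-5, 0), (0, 6), and (5, 0), is y = 6(1 - (x/5)²); a tiny sketch of that profile (the formula is mine, inferred from the stated points):

```python
def compression_rate(x: float) -> float:
    """Upside-down parabola: zero at x = +/-5 (start and destination),
    peak compression 6 at x = 0 (the midpoint of travel)."""
    return 6.0 * (1.0 - (x / 5.0) ** 2)

assert compression_rate(-5) == 0.0   # departure: normal gravity
assert compression_rate(0) == 6.0    # midpoint: maximum safe compression
assert compression_rate(5) == 0.0    # arrival: rejoin normal gravity
```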

Since every time you enter a gravitational-wave tunnel it (I think) condenses space/time, either to the power of the gravitational constant over some distance, or to the sum of the powers of condensing times the number of times it's been condensed, which is much greater, you'd need a white-hole excitement field to be able to produce enough electrons to withstand the forces.

Something to think about is that each time you send out a gravitational-wave tunnel it'll decay, so travel will need to be guided so as not to interfere with excitable atoms/planets; plus, you'll leave a trail.

Thanks for reading.

Yours,

Jordan Townsend.

The P=NP, P≠NP, P?NP, P=ASI relationship.

This is not a mathematical proof but a wider view of how each of these states interacts with all solvable problems, based on the two major factors determining the bonds between the states.

So we would have a series of problems we don't yet know to ask, a new state, P?NP. These move into the realm of problems we can't solve quickly but can verify as technology advances, P≠NP, which then moves into P=NP as technology increases in processing speed to the point where the problems are solved "quickly". At the center of the diagram we solve all problems within a given domain/dimension instantly. Would that be P=ASI, for All problems Solved Instantly?

Cheers,

Jordan Townsend.

November 17th, 2018.

Sorry for the lack of updates.

I’m schizoaffective and my delusions revolve around being monitored by some group through electronics and the like.

My therapist has advised that I not use my computer for a couple of weeks to try and clear my head. Since I draw my comic and write my books on it, I haven't had much to do.

I'm teaching myself finite mathematics and statistics on an iPad in the meantime to pass the time while I wait for the delusions to pass and for an increase in my antipsychotic medication to be approved. I still don't know how you medicate a belief, but hopefully it'll work and I'll stop seeing messages in my Spotify app and thinking I'm being monitored through my phone and computer. My therapist thinks that my dad building spy software for the US government has something to do with my delusions, and he's probably right.

I've been working on a governing theorem for an A.I. I want to build, called Zed, which I'll work on while I go to school at the local community college for a mechanical engineering degree. I can only take ten hours of class and study time per week per semester so I don't lose my disability and health insurance, so it's a slow process. I'm watching YouTube videos and finding PDFs of books to read to teach myself this stuff very slowly. I'm not a very good self-teacher without someone to ask questions of. It's a very slow process, but it is what it is.

I'm hoping that once I get into the algebra class (I scored one point too low to get into calculus, but it's fine), I can go to my professor and use their office hours to learn more about probability and about writing equations and functions mathematically, so I can explain the hierarchy of the A.I. before I start learning to code. I want to prove my theorems if I can.

I have so much to learn and I don't know where to start, but I know it involves machine learning and capsule networks, as well as robotics, data mining, and formulation.

I tend to get inventive when I'm ill, and I've been working on this governing theorem steadily for a couple of weeks, though I came up with the brunt of it last year during a moment of clarity. I won't share it here until the theorems are complete and I have a working hierarchy to implement. It'll be a while.

My goal is to be a high-functioning bipolar schizophrenic and get back into the workforce so I can work on my own projects in my free time. Disability doesn't lend itself to being able to afford to do anything; I get twenty bucks per week that doesn't go to bills in some way.

Anyway, the comic will come back in a week or two, depending on how badly I degrade with these new meds. Please bear with me, as I'm not actively trying to go insane and be useless. Sorry for the inconvenience.

Yours,

Jordan. d0_0b