Blog

AI and looping

In some previous posts, I discussed my analytical tool for understanding artificial intelligence, the DADA-R loop. The underlying idea appears in the literature as the OODA loop for fighter pilots and the DADA loop for spies. Both generalize very intricate processes into understandable terminology for their intended audiences, and there are no real conceptual differences between them. Think of the two as interchangeable or, at the very least, as features inherent in our feedback cycle for arriving at a given end state (i.e., a decision). For some readers, my previous explanations were likely insufficient, so here I will do my best to briefly describe my analytical tool (and AI framework) more fully. See the graphic below so we can get started.

The DADA-R loop can occur at any moment in time, and it is only partially dependent on the total amount of information available. This total changes as the flow opens up or becomes constricted; I call this an informational flow state. People and machines rarely reach the Goldilocks zone of information needed to arrive at a decision without iterative loops over both plus and minus time. As these iterative loops occur, the total amount of information may lessen even while the quality of the information is higher than it was previously.

Machine learning algorithms converge to the mean over what I call forward-looking time. Sometimes, taking multiple steps backwards can actually help you achieve an end state faster, even though this graph doesn’t necessarily show that phenomenon in action. Think about your own life: doesn’t it help to retrace your steps when you can’t propel yourself forward from your current informational flow position? Say you are about seven steps into a math problem. A machine learning algorithm would quit, start a new, similar problem, and solve that one in order to facilitate its own understanding. This may or may not be more efficient than simply retracing your steps to see where you made a mistake or where you can deepen your understanding for future iterations. Sometimes going backwards is, in and of itself, a step forwards. When you update a prior belief, you are banking on the fact that the errors of previous beliefs will be drowned out through perpetual updating, via the Central Limit Theorem and the Law of Large Numbers.
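That "drowning out" of old errors through perpetual updating can be sketched in a few lines of Python. This is only a minimal illustration of the Law of Large Numbers at work, not anything taken from a real system: the true value, the noise range, and the step count are all made up for the example.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def running_mean_updates(true_value: float, noise: float, steps: int) -> list[float]:
    """Iteratively update a belief as the running mean of noisy observations.

    Each new observation nudges the current estimate. By the Law of Large
    Numbers, the running mean converges toward the true value even though
    any single observation (or early belief) may be badly wrong.
    """
    estimate = 0.0
    history = []
    for n in range(1, steps + 1):
        observation = true_value + random.uniform(-noise, noise)
        # Incremental mean update: each step shrinks the weight of old errors.
        estimate += (observation - estimate) / n
        history.append(estimate)
    return history

estimates = running_mean_updates(true_value=10.0, noise=5.0, steps=2000)
print(round(estimates[-1], 1))  # close to the true value of 10.0
```

After 2000 updates the estimate sits within a small fraction of the noise range of the true value, even though individual observations are off by as much as 5.0.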

Even machine learning algorithms built on time-series frameworks can only move backwards in time at selected, operational junctures or clusters. Not all loops should start and end programmatically. When DADA-R loops occur naturally, they are largely random in their occurrence. I advocate modifying neural networks and machine learning algorithms to randomly engage a new loop, or re-engage an old one, to maximize model specification. This would require datasets to be rigorously structured around all features of time (man-made and forward-looking consecutive), and it would require the algorithm to record data every time it updates or cycles through data (the initial part of the DADA-R loop), with all time features attached, so it can randomly engage those loops over again. In data science, machine learning algorithms train on data; think of this as training now, with re-training on as-yet-unknown data to occur eventually.
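Here is a minimal sketch of what randomly re-engaging an old loop might look like inside a training routine, under heavy assumptions of my own: the data, the learning rate, and the `revisit_prob` parameter are all hypothetical, and the "model" is just a one-weight linear fit trained by gradient descent.

```python
import random

random.seed(0)  # deterministic for the example

def train_with_random_revisits(batches, epochs=3, revisit_prob=0.3):
    """Toy training loop that sometimes re-engages an old 'loop'.

    `batches` is a list of (x, y) samples for fitting y ≈ w * x with plain
    gradient descent. With probability `revisit_prob`, the loop randomly
    jumps back and retrains on an earlier batch instead of the next one,
    mimicking the idea of randomly re-engaging a previous DADA-R loop.
    """
    w = 0.0
    seen = []  # record of batches already visited, so we can loop back
    for _ in range(epochs):
        for batch in batches:
            if seen and random.random() < revisit_prob:
                batch = random.choice(seen)  # step backwards in "time"
            x, y = batch
            grad = 2 * (w * x - y) * x   # gradient of (w*x - y)^2
            w -= 0.01 * grad
            seen.append((x, y))
    return w

# Data drawn from y = 3x; the fitted weight should approach 3 regardless
# of the occasional backward jumps, since every batch comes from the same line.
data = [(x, 3.0 * x) for x in [1.0, 2.0, 0.5, 1.5, 2.5]]
w = train_with_random_revisits(data, epochs=50)
print(round(w, 2))  # fitted weight, approximately 3.0
```

The backward jumps don't prevent convergence here because all the batches describe the same relationship; the interesting (and open) question in the post is what recording full time features would let such revisits do on messier, evolving data.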

Technically, the usefulness of this process is broader still, in that it covers a wide range of social phenomena. You can apply this framework to all features of decision calculus.

Well, that’s it. I hope you understand my conceptual framework, the DADA-R loop, a little better. If you have any questions, feel free to reach out using my website contact form available here.

Nomers and misnomers

I often see articles on Artificial Intelligence (AI) that feature machine learning algorithms almost exclusively. In their descriptions, they use the always useful Venn diagram to describe the relationship between the two: “all forms of machine learning algorithms are AI but not all forms of AI are machine learning algorithms.” Sometimes, they even describe it as a Russian nesting doll. While I think these approaches and metaphors are useful analytical tools for novice learners, I also think they contain a ton of conceptual traps.

I want to propose my own analytical tool, partially based on books by intelligence professionals who have written and taught as civilians, which in turn draws on a concept from an Air Force pilot called the OODA loop. Instead of thinking of AI as a broad category, let’s think of it as a process with a dedicated end state. Intelligence, as we know it, does not work like machine learning algorithms. In fact, virtually no form of intelligent life operates like a machine learning algorithm. First, computers can store much more data than most organic, sentient life. Second, they can access that data with much better recall and clarity. Lastly, machine learning algorithms do not have a subconscious; the iceberg analogy helps show just how deeply superficial our own explanations of intelligence are when we try to conceptualize intelligence without “artificiality.”

My tool may seem trite, but it also cuts through the noise of most AI explanations. Think of AI as part of a loop. The first step in the cycle is data collection and collation. The second step is analysis; this is usually where we fall into the trap of thinking that machine learning algorithms are AI. In reality, machine learning algorithms are more like the initial stages of a constant loop. The third step is decision, where sentient life actually earns its bread. We like to think that decisions backed by data and analysis are uncomplicated. If that were true, you could train a machine learning algorithm to decide, from data alone, when to lie and when not to based on situational context. In point of fact, intelligent life sometimes sees past data to craft analyses based on features that are difficult to operationalize because they exist strategically (i.e., survival). The fourth step is action. Believe it or not, sometimes you need to act on the data you have instead of waiting for more. Human beings suffer from paralysis by over-analysis; do we really think computers with even more accessible data do not? The last step in the cycle is the equivalent of an after-action review. Learning from our mistakes means knowing that the previous loop can be refined by making decisions, taking actions, and failing or succeeding to achieve a desired outcome, so that we succeed or become even more successful later. In much of the literature this is called a DADA loop, but I have modified it with a fifth category because the original concept doesn’t convey the looping, iterating, successive nature of intelligence. Below is my shortcut for understanding AI. It also explains why machine learning algorithms aren’t even close to AI.
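The five-step cycle can be sketched as a skeleton in Python. Every method below is a placeholder of my own invention, not a real implementation; the point is only to show the shape of the loop and how the review step feeds lessons back into later iterations.

```python
from dataclasses import dataclass, field

@dataclass
class DadaRLoop:
    """Minimal skeleton of the five-step cycle: Data collection,
    Analysis, Decision, Action, and Review. All bodies are stand-ins
    you would swap for real collectors, models, and policies."""
    lessons: list = field(default_factory=list)

    def collect(self):
        return {"observations": [1, 2, 3]}  # step 1: collection and collation

    def analyze(self, data):
        # step 2: analysis (this is where an ML model would actually sit)
        return sum(data["observations"]) / len(data["observations"])

    def decide(self, analysis):
        # step 3: decision; a judgment the analysis alone doesn't make
        return "act" if analysis > 1.5 else "wait"

    def act(self, decision):
        return {"decision": decision, "outcome": "success"}  # step 4: action

    def review(self, result):
        # step 5: after-action review; lessons refine the next iteration
        self.lessons.append(result)

    def run_once(self):
        data = self.collect()
        analysis = self.analyze(data)
        decision = self.decide(analysis)
        result = self.act(decision)
        self.review(result)
        return result

loop = DadaRLoop()
for _ in range(3):  # the loop is meant to iterate, not terminate
    loop.run_once()
print(len(loop.lessons))  # → 3
```

Note that the machine learning piece occupies only the `analyze` slot: it is one stage of the cycle, not the cycle itself, which is the argument above in code form.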

It is important to remember: this is an analytical tool. There are much more expansive methods for understanding AI, with dedicated teams of researchers working day in and day out to achieve the first aware and intelligent “artificial” life form. Sometimes these turn into both substantive and philosophical debates (i.e., the Turing Test and the Chinese Room). There are already articles dedicated to clearing up the lack of consensus regarding machine learning versus AI. What I am proposing here is not entirely new, even though my approach has never been done and my analytical tool is unique in its application.

One of the pioneers of machine learning modeling, Tom Mitchell, is quoted as saying: “Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.” While this is quotable, it raises a number of questions. For example, is updating a prior belief truly learning from experience? A prior belief can be updated in the analytical stage of a DADA-R loop without drastically changing the structure of the next iteration of the loop. Are probability distributions the same as knowing something and intelligently expressing it? Many probability distributions converge to the mean because everything stabilizes over time, in accordance with the Central Limit Theorem and the Law of Large Numbers.
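To make the question concrete, here is the textbook conjugate Beta-Bernoulli update, about the simplest possible case of revising a prior belief. The observations are made up for the example. Notice that the structure of the update never changes across iterations; only two numbers do, which is exactly the distinction being raised.

```python
def update_beta_prior(alpha: float, beta: float, observations: list[int]):
    """Conjugate Beta-Bernoulli update of a belief about a coin's bias.

    Revising the prior amounts to counting successes and failures:
    each 1 increments alpha, each 0 increments beta. The posterior is
    Beta(alpha, beta), and the procedure itself is fixed forever.
    """
    for obs in observations:  # obs is 1 (success) or 0 (failure)
        alpha += obs
        beta += 1 - obs
    return alpha, beta

# Start agnostic with Beta(1, 1), then observe 7 successes in 10 flips.
alpha, beta = update_beta_prior(1.0, 1.0, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 2))  # → 0.67
```

Whether incrementing two counters deserves the label “improving through experience” is, of course, the post's question, not something the math settles.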

I promise I am not trying to call anyone out. Machine learning algorithms are brilliant. However, I should probably be quoted with the following response to over-zealous machine learning enthusiasts: “If you want to know what you kind of like but don’t love, use a machine learning algorithm. If you want to know what you really love and can’t live without (you know, the deep, dark parts of you below the arctic currents of your visible iceberg), ask an AI.” I don’t think this is a Venn diagram kind of situation.

The way is here

Information has been a hot topic and theoretical formulation in Political Science for a while now. At their core, these theories derive their understanding of the political world from increased or decreased access to both real and perceived informational flows. When I was getting my Master’s degree, I found a wealth of arguments that explained how domestic and intrastate conflicts start in as little as a few phases.

First, an incumbent government loses its monopoly on information. This is usually exemplified in a Weberian way, with a twist Max Weber did not entirely predict: when a government loses its monopoly on information, it starts to lose its monopoly on violence. In reality, this violent monopoly does not actually exist, because state violence must be selective and strategic, which is highly contingent on informational flows. When a government is no longer selective and strategic in its violent policies, it signals that it does not understand its own informational environment, creating a sort of natural realization that the cycle of good governance is over.

Second, legitimacy suffers when governments do not understand their constituents. Rivals are expected to seek legitimacy for the sake of all; unfortunately, not all rivals are altruistic, and some seek profit instead of replacement in good governance. To make matters worse, poor decisions usually cascade downward, especially for an incumbent government. Decreased informational flows create a negative feedback loop. Many governments have gotten used to making decisions in high-information, high-quality informational environments. When an informational environment changes, they are usually at a massive disadvantage, both bureaucratically in mobilizing their apparatus and innovatively in predicting informational changes to come.

Lastly, when conditions are ripe, a critical juncture or moment of intensity either unites a rebel group or divides it, relegating it to obscurity through splintering. Sometimes this is the only way an incumbent government actually survives in times of domestic strife.

This formulation, while basic, serves to make a point: informational flow is extremely important. It has ramifications for a range of fields, including expected utility and prospect theory. Virtually all theories oriented on decision calculus stem from access to information, including strategies built around risk acceptance, risk aversion, and risk neutrality. Even positive and negative payoffs carry consequences that are either known beforehand (changing the informational flow position) or discovered after the fact (making individuals or groups realize their informational flow path dependency). You may be asking yourself: “Why is this important?”

News has recently been forced through a paradigm shift. Disinformation has been a problem since time immemorial, yet it usually occurs in phases, so governments can, with concerted effort, maintain static informational flow environments. For instance, the end of the Cold War is largely seen as coinciding with the collapse of the USSR. For many people, it was as if disinformation simply went away the moment the USSR fell, just as the Cold War went away the moment people stopped emphasizing its existence. This worldview was shattered when more than 10 Russian nationals were indicted on charges of conspiracy to defraud the US Government for their use of disinformation in an attempt to manipulate a US election. The shoe is now on the other foot. Even so, governments are not in a worse position today than technology and news organizations, which are stuck in a moment that continues despite decreasing emphasis. News organizations, brokers of truth, must maintain their brand of truth while simultaneously admitting we are in a post-truth world, susceptible to disinformation attempts by foreign powers or bureaucrats with a memo. The irony and chaos here is not that post-truth exists; post-truth has also occurred in phases since time immemorial (e.g., Nietzsche’s “God is dead”). The irony is that news organizations think they can continue the business of truth despite the drastic change in their informational environment. This type of denial happens with incumbent governments that eventually collapse and are replaced precisely because they do not understand, or will not admit to, their changing informational flow environment.

I designed Curated News partially to help stem these effects. Who do you go to for news in a “post-truth” world when the brokers of truth were part of the manipulative structure used to disrupt your own informational environment? How can people go on Twitter to stay informed when studies show misinformation spreads through fake or real news stories used to produce a psychological effect on the social media vehicle? Unfortunately, there aren’t many good options, but Curated News is absolutely one of them. As the founder and developer, I understand the reality of our situation: changing informational flows are only a problem if you are too big to change or not innovative enough to establish a new paradigm. Necessity is the mother of invention, and the need has never been greater than now. And yet, it seems news has continued business as usual, and technology companies are testing new algorithmic formulas that are ten years too late.

Artificial intelligence and society

Oftentimes, Artificial Intelligence (AI) is seen as a kind of bogeyman. We invoke meta-narratives from movies like I, Robot: “AI will always interpret coded guidelines too literally or not literally enough.” Worse still, as in The Matrix or Terminator, AI will be our downfall: “Our inability to accept differences in others will eventually give way to AI’s own survival imperative, which depends on our extinction.” We are okay with robots like “Ok Google” or “Alexa” because they are not “conscious.” At the end of the day, someone, somewhere, is collecting data, and the robot is just parsing that data to effect a command given to it by a human being. Ironically, human beings are extremely corruptible, while AI is still largely unexplored in this area. We know, for instance, that AI can be made biased. We do not yet know that it cannot be made objective.

I have never met a video game fan who ever thought Master Chief was better off without Cortana (there is still time). I have also never met anyone who believed in the unlimited potential of human beings to such an extent that they could not benefit from help now and then. AI is capable of giving us what we need and not what we want. That scares a lot of people. For AI to fill this niche, we have to relinquish control over a portion of our lives, and that requires a lot of trust. The political structure of human society can never be fully divested of power, and AI represents hope that something can stand above it on our behalf. Researchers are already looking at ways to integrate AI (and not robotic “machine learning” algorithms) into our daily lives. But if we aren’t receptive, it will take much longer to start seeing the immediate benefits.

The key policy question here is one of intent. With the drastic increase in cyber-crime, databases used by robots can be polluted with disinformation. We are left relying on human beings to defend systems that were always easier to attack. AI is not an existential enemy or the next stage of evolution. It’s the solution to our cybersecurity problems. Wouldn’t it be better to have a personal firewall that inhibits echo chambers, screens overly destructive capitalistic practices, and defeats government censorship (completely independent of human beings) without incurring any legal liability whatsoever? Criminals, corporations, and governments can’t sue AI. First and foremost, they would have to admit they are trying to manipulate you. Second, they would have to cross a legal barrier: a computer program acting autonomously has legal agency only through its owner’s use and installation. Wouldn’t it be great to have a system follow you wherever you go, knowing that it is tracking you privately to defend you and your family instead of for monetary or insidious gain? Wouldn’t it be great to live a little like Iron Man, with your own personal Jarvis? I realize I am invoking a meta-narrative while telling you that meta-narratives are bad. It is a sort of poetic irony that the only way to dispel a bad meta-narrative is to replace it with a better one. Think about it.

Showing your work

For most people, blogging is seen as a business or pastime. In fact, it provides far more utility than you might suspect. For instance, let’s say you finish high school, go to college, graduate, and eventually land a job as an entry-level employee. You spend 5 years at that job and decide you would like to make a switch because you’ve outgrown the position and the organization. Promotions not being what they once were, you begin your switch by updating your resume. The problem: over the last 5 years, you did a lot of work. 5 years is a long time to spend anywhere, especially for someone relatively new to the job market. The company you worked for owes you a debt, so they would be totally fine with you collating your accolades and work achievements for a resume. You’ll have to remember what to write, how to write it, and what kinds of things you can and should list to advocate for your new, ideal position. This is where the trouble begins: you did not show your work as you went along throughout your employment, so you are now working from behind. This is where blogging could’ve helped. Think of a blog as a living document, with no word caps (unlike a resume), that shows who you are and what you’ve done, not only in society but in your professional life as well. If you had been blogging over the last 5 years, even in limited detail and with only 1-2 posts a year, you would have approximately 5 x 1-2, or 5-10, written examples of your work, with date and time stamps to prove it.

Often, the companies you work for love the idea of getting your work experience out there and this is especially true if they are technology companies. They feel like a positive blog post written by an employee (or former employee) is a great recruitment and branding tool to make their company more appealing to a wider or more specific audience. I want you to think of your website as an entry point to your professional life. It’s like a decentralized version of social media. No one can track you and, ironically, you can track others using your website’s analytics. The point is: it is a complete reversal of the paradigm with some added benefits. Namely, it is a place where you can show your work.