In some previous posts, I discussed my analytical tool for understanding artificial intelligence, the DADA-R loop. The literature describes similar cycles as an OODA loop for fighter pilots or a DADA loop for spies. Both are largely the same in that they compress very intricate processes into terminology their intended audiences can use, so treat the two as interchangeable or, at the very least, as features inherent in our feedback cycle for arriving at a given end-state (i.e., a decision). For some readers, my previous explanations were likely insufficient, so here I will briefly describe my analytical tool (and AI framework) more fully. See the graphic below so we can get started.
A DADA-R loop can occur at any moment in time, and it is only partially dependent on the total amount of information available. That total changes as the flow of information opens or becomes constricted; I call this an informational flow state. People and machines rarely reach the Goldilocks zone of information needed to arrive at a decision without iterative loops forward and backward in time (plus and minus time). As these iterative loops occur, the total amount of information may shrink even while its quality rises above what it was previously.
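The idea above can be sketched in code. This is a toy model, not an implementation of the DADA-R loop itself: the starting quantity and quality, the update ranges, and the `quality_threshold` for deciding are all hypothetical values chosen only to illustrate how quantity can shrink while quality rises across iterations.

```python
import random

def information_loop(quality_threshold=0.9, max_iters=50, seed=0):
    """Toy model of an informational flow state: each iteration may
    constrict the pool of available information while filtering raises
    its quality, until there is enough quality to decide."""
    rng = random.Random(seed)
    quantity, quality = 100.0, 0.2  # hypothetical starting flow state
    for step in range(1, max_iters + 1):
        # The flow opens or constricts: quantity can rise or fall...
        quantity *= rng.uniform(0.8, 1.1)
        # ...while each pass filters the pool, nudging quality toward 1.0.
        quality += (1.0 - quality) * rng.uniform(0.05, 0.25)
        if quality >= quality_threshold:
            return step, quantity, quality  # enough quality to decide
    return max_iters, quantity, quality  # decided on whatever we have

steps, quantity, quality = information_loop()
```

Under these assumed parameters the loop usually reaches the decision threshold well before `max_iters`, often with less raw information than it started with.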
Machine learning algorithms converge to the mean over what I call forward-looking time. Sometimes, taking multiple steps backwards can actually help you reach an end-state faster, even though this graph doesn't necessarily show that phenomenon in action. Think about your own life: doesn't it help to re-trace your steps when you can't propel yourself forward from your current informational flow position? Say you are about seven steps into a math problem. A typical machine learning algorithm would quit, start a new, similar problem, and solve that one in order to build its own understanding. This may or may not be more efficient than simply re-tracing your steps to see where you made a mistake, or where you can deepen your understanding for future iterations. Sometimes going backwards is, in and of itself, a step forwards. When you update a prior belief, you are banking on any errors in previous beliefs being drowned out through perpetual updating, in the spirit of the Central Limit Theorem and the Law of Large Numbers.
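The Law of Large Numbers point can be made concrete with a minimal sketch. Here the "belief" is just a running mean of noisy observations; the true value, noise level, and update count are made-up numbers for illustration. The incremental-mean update is the standard formula, not anything specific to DADA-R.

```python
import random

def running_mean_update(true_value=10.0, noise=2.0, n_updates=5000, seed=1):
    """Each update is a noisy observation; the running mean is the
    perpetually-updated belief. By the Law of Large Numbers, the
    belief's error shrinks as updates accumulate."""
    rng = random.Random(seed)
    belief, errors = 0.0, []
    for n in range(1, n_updates + 1):
        observation = true_value + rng.gauss(0, noise)
        belief += (observation - belief) / n  # incremental mean update
        errors.append(abs(belief - true_value))
    return errors

errors = running_mean_update()
```

Early errors are large (the first few beliefs are dominated by noise and the arbitrary starting point), but after thousands of updates the belief sits close to the true value, which is the sense in which old errors get "drowned out."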
Even machine learning algorithms built on time-series frameworks can only move backwards in time at selected operational junctures or clusters, and not all loops should start and end programmatically. When DADA-R loops occur naturally, they are largely random in their occurrence. I advocate modifying a number of neural networks and machine learning algorithms to randomly engage a loop, or re-engage an old one, to maximize model specification. This would require datasets rigorously structured around all features of time (man-made and forward-looking consecutive), and it would require the model to record data each time it updates or cycles through data (the initial part of the DADA-R loop), with all time features attached, so it can randomly engage those loops over again. In data science, machine learning algorithms train on data; think of this as training with an as-yet-unknown re-training to occur eventually.
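One way to read this proposal in code: stream through time-ordered data as usual, but with some probability randomly re-engage an earlier window. This sketch only logs which windows get visited; the window size, `revisit_prob`, and the absence of any actual model update are all simplifying assumptions of mine, not part of the original framework.

```python
import random

def train_with_random_revisits(data, epochs=3, revisit_prob=0.3, seed=2):
    """Sketch: walk time-ordered windows forward, and with probability
    `revisit_prob` randomly re-engage a previously completed window
    (an 'old loop'). Returns the visit log instead of a trained model."""
    rng = random.Random(seed)
    window = 4                                   # hypothetical window size
    windows = [data[i:i + window] for i in range(0, len(data), window)]
    history, visits = [], []                     # completed windows, visit log
    for _ in range(epochs):
        for idx, _win in enumerate(windows):
            visits.append(idx)                   # forward pass: current loop
            history.append(idx)
            if rng.random() < revisit_prob:
                visits.append(rng.choice(history))  # re-engage an old loop
    return visits

visits = train_with_random_revisits(list(range(20)))
```

In spirit this resembles experience replay in reinforcement learning, where past transitions are sampled again during training; here the resampling happens at the level of whole time windows.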
Ultimately, the usefulness of this process is broader than machine learning alone: it covers a wide range of social phenomena, and you can apply the framework to all features of decision calculus.
Well, that’s it. I hope you now understand my conceptual framework, the DADA-R loop, a little better. If you have any questions, feel free to reach out using my website’s contact form, available here.