Blog

Cydog Browser

A lot of data scientists are good at manipulating databases and structuring data, including using their code-oriented and math-based skill-sets to achieve analytical parsimony. The problem: doing data science this way isn’t particularly difficult. As a function of time, anyone can solve a math problem. Corporations do not work at the speed of light when accomplishing their objectives, so they have plenty of time – even when they are on a deadline. As a function of effort, anyone can become a coder. Corporations do not have to write code from scratch anymore. At this point, all the code is out there and all the tools are available. It’s all about compilation – the wrangling of separately existing code into something greater than the sum of its parts.

The world is becoming exhaustively complex, and technical skills are becoming partly irrelevant because brilliant people are automating them for us. Think about all the packages, libraries, and IDEs that are out there. You don’t have to do nearly as much work by hand as you used to, especially in math. This has led to a sort of leveling effect: everyone can do everything with enough time and effort spent doing it. As a result, things requiring conceptual complexity, like causality, are becoming the mainstay of human innovation and success. Earlier causal problems in math were cave-man problems of A and B. In these scenarios, A was a function of B – meaning, A caused B. While useful, there are almost no circumstances where A simply causes B anymore. Firstly, human beings have increased their knowledge over time, and this gives more nuance to our traditional viewpoints of causation. Secondly, human beings are affecting our environment to such an extent that things do not always work the same way as time moves along and the future becomes reality. In today’s world, problems sound like this: A causes B while simultaneously being unmade in its interaction to make C, and changes over a time-series to become D overall while intrinsically maintaining its originally transacted qualities of C. Math is a rockstar language. But it can only help you codify statements of complexity you already grasp. Further, code can only help you codify math you want to use for an analytical product. Neither can help you create statements of causality. Those statements have to be arrived at through hard-won research and the kind of understanding only causal reasoning can provide.

I created Cydog Browser because I noticed a chink in the armor of corporate existence. Corporations are uniquely weak at change in an era of turmoil and in a future where newness cannot be contrived or forced onto people. You can’t hack your way to success. You have to earn your success through the types of analytical judgments that cannot be made with new math or better coding languages. The fact is: the future has never been more unwritten than it is now, at any point in human history. We are in a moment in time where the titans of old are no better or worse at solving the problems of today or tomorrow than the rest of us. In fact, some of us have been able to accomplish things that corporations choose not to accomplish or cannot accomplish outright. Cydog Browser was intended to be a minimalist browser. Some people do not want to mess with settings and features. Instead, they want them to be automated securely, decreasing the time spent looking for the information they need to make their lives easier. Some users love customization and I do too. However, the only real customization available on Android browsers is the equivalent of a home page or search engine change. It is this insane feature-rich, value-devoid corporate machination that led me to dive deeply into the Android browser ecosystem. I realized there were a ton of big companies who really didn’t do much of anything successful for the average end user. And yet, they were absolutely loaded with cash because the inventors who made the system at the beginning made something so great it was essentially baby-proof. Some of the mobile browser features I built to deliver more value are private browsing, anti-fingerprint tracking, and cross-site tracking prevention.

With Cydog Browser, I was able to solve the private browsing and fingerprinting problem. Fingerprinting is a methodology where people can be tracked based on the uniqueness (signature) of a given browser. Think about it this way… If you are the only person wearing a blue shirt in a crowd of people wearing pink shirts, it is fairly easy to find the person wearing the blue shirt. Your browser gives off data, and this data allows corporations to track you on the web. Their goal is to profile you with ad-tracking in the hopes of serving you more relevant advertising to maximize corporate profits. Cydog Browser currently scores better on fingerprinting tests than other Android browsing applications on the market. There is only one exception, a browser called Privacy Browser, which is also available on the Google Play Store. Unfortunately, Privacy Browser scores extremely poorly on pretty much all browser security tests. I truly think you will enjoy Cydog Browser more when you give it a chance to grow on you, as it totally reverses the current mobile browser model in UI, function, privacy, and security.
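To make the blue-shirt analogy concrete, here is a minimal sketch of how a fingerprint gets assembled, assuming a hypothetical set of attributes a site might read from your browser. The attribute names and values below are invented for illustration, not what any particular tracker collects:

    import hashlib
    import json

    # Hypothetical examples of attributes a site can read from a browser.
    browser_attributes = {
        "user_agent": "Mozilla/5.0 (Linux; Android 13) ...",
        "screen_resolution": "1080x2400",
        "timezone": "America/New_York",
        "language": "en-US",
        "installed_fonts": ["Roboto", "Noto Sans", "Droid Serif"],
        "canvas_hash": "a3f9c2",  # result of rendering a hidden canvas test
    }

    def fingerprint(attributes: dict) -> str:
        """Collapse the attribute set into one stable identifier."""
        serialized = json.dumps(attributes, sort_keys=True)
        return hashlib.sha256(serialized.encode()).hexdigest()[:16]

    print(fingerprint(browser_attributes))
    # The more unusual the combination, the more this hash uniquely
    # identifies one browser in the crowd - the blue shirt effect.

Anti-fingerprinting measures work on the other side of that equation: they try to make your combination of attributes as common (as pink-shirted) as possible.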

I will repeat this for those who have heard me say it time and time again in person: these technological gunslingers are working faster than governments and corporations. In fact, most bug bounty programs are supported by, and paid out to, individual programmers and developers. Corporations and governments are woefully inadequate because they are hiring the wrong sorts of people, and they end up having to farm out their bounty programs to the people they should’ve hired outright. Or, corporations are so woefully inadequate at using their people that those employees do more in their free time by bug bounty hunting than they do at work in a traditional corporate setting. Either way is bad for utilizing key talent and maximizing our only truly valuable resource: people. People make inventions; data does not. My hope is that this starts the discussion on changing the current emphasis on math and coding to an emphasis on causality.

Enjoy my incessant need to produce and provide value to the world by downloading Cydog Browser! It is more than just a browser: it is a safe place to get information and a place where you can explore the internet securely with all the wonder it was meant to harness and purvey.

Catlicked

Recently, a senior contributor at Forbes, Emma Woollacott, wrote an important article that will serve as an excellent springboard for something that needs a discussion: data collection sold to third parties. In this article, Emma breaks down which technology companies are collecting your data and how much of it they are selling to third-party firms. Instagram and its parent company, Facebook, are the biggest culprits.

“Instagram shares 79 per cent of the data it collects with third parties. This includes purchases, location, contact information, contacts, user content, search and browsing history, identifiers, usage data, diagnostics and financial information.”

Data collection has always been a big deal. Imagine the type of information you leave in the wake of your infinite-scrolling technological habits. Obviously, there are things you know can be tracked, like purchases and location data. But there is also data you didn’t even conceptualize as trackable. These types of data include the clicks you make, the length of time you spend on the content you click, and even those moments where you do a double take on your infinite scroll, scrolling back up or down for a few milliseconds to see the thing that really caught your attention. All of this data is a “footprint” for understanding your online and internet behavior. Does it bother you that almost 80% of the data collected by Instagram is shared with third parties? Would it bother you if this data was exposed after it left Instagram’s corporate hands? The U.S. Government can and has purchased this kind of data in the past. This strange collection behavior to monetize your brainwaves is disconcerting – to say the least.
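As a rough illustration of what that footprint looks like in practice, here is a toy model of the kinds of engagement events described above. The event names and schema are invented for this sketch, not taken from any real platform:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class EngagementLog:
        """A toy telemetry log: one record per user interaction."""
        events: list = field(default_factory=list)

        def record(self, kind: str, item_id: str, **extra):
            self.events.append({"t": time.time(), "kind": kind, "item": item_id, **extra})

    log = EngagementLog()
    log.record("impression", "post_42")
    log.record("dwell", "post_42", seconds=8.3)        # time spent on content
    log.record("scroll_reversal", "post_43", ms=350)   # the "double take"
    log.record("click", "post_43")

    # Each record is one step of the behavioral footprint described above.
    for event in log.events:
        print(event)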

Think about it this way. Exposed data becomes impossible to trace. It is like trying to figure out who got a hold of your social security number. It is also like trying to get your social security number taken off the internet after it was posted on the dark web. There are informal rules of the internet in relation to human behavior: once something is put online, it can and will be there forever. It might get taken down but it will always sprout back up in the future.

This is one of the many reasons I created Catlicked. Many people use Instagram because there are a ridiculous number of social media “influencers.” In fact, Chanel has its own dedicated Instagram team to develop its brand and sell its products there. Many people want access to the coolest trends and the newest styles in fashion. There aren’t many good ways to get access to these types of unique human endeavors. Instagram has become an important method people use to get – and stay – up to date on design, fashion, and styles. Catlicked came into the fray to replace this strange bed-fellowing. I knew there was a better way to engage the business of fashion. I also knew there was a way to platform differently. I was right. Catlicked is the future of online social media content delivery.

Let’s discuss. First, Catlicked does not collect data, so I can’t sell data to third parties. Second, Instagram makes a ton of money off “influencers” without compensating them for their hard work in massively boosting Instagram’s user activity. When you think about it, influencers are doing the heavy lifting, and Instagram and Facebook are just collecting the data to help advertisers and companies with their price discrimination strategies (which is worth way more because it is scalable in its utilization and sale). Catlicked doesn’t have accounts and doesn’t do scraping. It takes you straight to the source of those fashion and style blogs you know and love. It also takes you to about 1,000 you don’t, in order to stimulate interests you may not even know about. The process behind getting to these sites is entirely random, so no one can manipulate the system to show you content that uses marketing gimmicks to sell you stuff you may not actually like (see the sketch below). Third, influencers can make websites (and many of them do already) where they can blog, sell products, and provide other content to stimulate interest in their techniques, tips, or trade. A website allows a user to add advertisement infrastructure or customize their content because it gives more flexibility to its users. Ironically, instead of Instagram collecting the data, the influencers are collecting the data, and they can sell it (which I do not suggest) or they can keep it to help them sell more relevant products to their customers. What would be the point of selling website data you collect to your competitors? Catlicked is the best of many worlds. You get a system where you get to explore the internet without knowing what you are looking for in advance, stimulating your creative juices. Influencers get to monetize their efforts without a corporate overlord taking all the profits. We get to decentralize data collection to a million separately hosted websites so there isn’t one company in charge of all the data. You’ll find: selling your data to a third party becomes infinitely more difficult under this framework.
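For clarity, the randomness claim amounts to something like a uniform draw. Here is a minimal sketch under that assumption; the site list and function name are hypothetical, since this post doesn’t publish Catlicked’s internals:

    import random

    # A stand-in for roughly 1,000 independently hosted fashion and style sites.
    FASHION_SITES = [
        "https://example-style-blog.com",
        "https://example-streetwear-daily.com",
        "https://example-atelier-notes.com",
        # ... imagine ~1,000 entries here
    ]

    def next_site(sites: list[str]) -> str:
        """Uniformly random choice: every site has equal odds, so there is
        no ranking signal for marketers to game."""
        return random.choice(sites)

    print(next_site(FASHION_SITES))

The design choice matters: with no accounts, no engagement data, and no ranking, there is nothing to optimize against – and nothing to sell.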

Welcome to the new world of technology. I tell people this all the time. Business has been really good at picking winners and losers when it comes to markets and production-facing infrastructure. Unfortunately for business, inventors always do what they do best: they make products that give people value relative to what is currently on the market. This “inventor’s value” is almost always better than an investor’s get-rich-quick scheme. In the future, inventors will be the only people that matter because they will be the only ones who can help us get value in an increasingly difficult and complex world where value becomes value-less (or less value-laden). This bubble had to burst at some point. I’m just glad I get to be a part of it, and I am glad I can actually call myself an inventor. The future seems like it is now. I’ll let you be the judge. Try Catlicked for yourself.

Buffoon arts

I recently started a YouTube channel called Buffoon Arts under my name, Matthew Benchimol. Here is the link. I don’t usually like making multimedia and putting it out into the world. However, there was an opportunity with YouTube and my research that I just couldn’t pass up. Remember when I discussed my AI conceptual framework called the DADA-R Loop in two earlier posts here and here? I have been trying to find better ways to explain this conceptualization. In that vein, I made some videos that will make excellent quality control and assurance tests for AI researchers while simultaneously rekindling public interest in riddles (which I love) by changing the way we platform riddles in this world of new media consumption.

Each video is a self-contained DADA-R loop. The last clip of each video is the answer to the riddle and therefore the dedicated end-state. After watching any of my videos, you’ll probably notice that a machine learning algorithm which updates a prior belief – even when given more and more information, forwards and backwards in time – will likely not be able to arrive at the riddle’s answer. Human beings are very good at something I like to call interpretative understanding. We are trained our entire lives to see information where information doesn’t literally exist. These forms of information take the shape of signs and symbols. We are also trained to interpret these signs and symbols towards social ends, so they are constantly creating, re-creating, contextualizing, and re-contextualizing more signs and symbols in an ever-expanding cyclical process (like a reverse spiral that expands outward instead of shrinking downward).
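To see why prior-updating alone gets stuck, here is a minimal Bayesian sketch with a toy riddle. The candidate answers and likelihood numbers are all assumptions invented for illustration:

    # Four candidate answers, starting from a uniform prior.
    candidates = {"clock": 0.25, "river": 0.25, "shadow": 0.25, "echo": 0.25}

    def update(prior: dict, likelihood: dict) -> dict:
        """One Bayesian update: posterior is proportional to prior times likelihood."""
        unnormalized = {c: prior[c] * likelihood.get(c, 0.0) for c in prior}
        total = sum(unnormalized.values())
        return {c: v / total for c, v in unnormalized.items()}

    # Each clip contributes evidence. If no clip's literal content assigns
    # any weight to the true, interpretative answer ("echo"), no amount of
    # updating can ever recover it.
    clip_evidence = [
        {"clock": 0.5, "river": 0.3, "shadow": 0.2, "echo": 0.0},
        {"clock": 0.6, "river": 0.2, "shadow": 0.2, "echo": 0.0},
    ]
    belief = candidates
    for likelihood in clip_evidence:
        belief = update(belief, likelihood)
    print(belief)  # "echo" is pinned at zero no matter how many clips arrive

This is the formal version of the claim above: once the likelihood model can’t see the interpretative layer, more data only sharpens the wrong posterior.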

Riddles are the culmination of human interpretative understanding. As a process, they primarily leverage interpretative information, using poetry to answer a question with information that is hard to see as information outright. If someone were to take my videos and run them through an AI for “processing,” they would see a fundamental rule of AI frameworks: as abstraction increases (even with a massive amount of information available), information quality, integrity, and decision success decrease. Believe it or not, this is why picture CAPTCHAs work so well to stop bots probing websites to scrape data. CAPTCHAs are essentially challenges. Usually, only humans can pass these challenges because they rely on details machine learning algorithms have a hard time interpreting. They are a simplified version of a riddle leveraged towards very practical and technological ends. Since they give less comprehensive information, however, CAPTCHAs aren’t really a good test for AI capability. Human beings can review information one sense at a time even though they experience information with all senses simultaneously. AI sees no such boundaries because data is a function of processing power. It converts stimulus into data instead of isolating senses to contextualize and review scenarios analytically. As riddles, my videos give textual information (a brief description and title), a wide range of moving pictures in forwards and backwards time, and music that tends to fit the human experience of whatever solution the riddle is looking for – happiness is associated with whimsy, anger with syncopated competition, and zen with a build-up that leads to a steady interplay of musical consistency. My videos are not perfectly representative but I think, if I do say so myself, they are fairly good at achieving their intended objective.

I am hoping my YouTube Channel becomes a tool to help wayward AI researchers and future AI consumers understand AI more intuitively. It is also really fun trying to guess the riddle before the end of the video. Here are direct links to my first three videos with a rough label on their difficulty and level of abstraction:

Consumption (high difficulty & extremely abstract):

Tension (moderate difficulty & moderately abstract):

Movement (high difficulty & moderately abstract):

The theater state of war

Clifford Geertz coined the term theater state. The theater state phenomenon comes from nineteenth-century Bali and showcased Geertz’s belief that Negara had substituted performance for war. Let me explain. Performance couldn’t just be ordinary performance. It had to be dramatic and heated, representing a sort of cathartic replacement for the type of war that is irreversible. Instead of violence that leads to violent confrontation and death, there is a violent display that replaces it. Instead of violent physical display, although there is certainly room for it even in a theater state, there is ritual that dictates the display, limiting its violent properties. Think of this as aligning your expectations with eventuality. Everyone gets to be part of the performance, as they are either watching it or participating in it outright. This system and construction is probably the most interesting human system for conflict resolution I have ever come across in my academic career (which is lengthy). This post is not about Geertz’s research per se. Instead, it is a discussion: anything a well-meaning academic makes can be co-opted for nefarious purposes. In order to ensure those purposes never reach fruition, we have to talk about it, discuss it, and put a hard stop to it. My hope is this post will change your mind on how to approach people and organizations that leverage the capacity for human imagination towards ends that only serve to destroy it.

The theater state of war is a unique phenomenon that parallels the theater state. However, unlike the theater state, which was developed to limit violent reproduction, the theater state of war weaponizes the theater in order to guarantee war. Think of this like a lifelong commercial meant to capture your attention and direct it towards very specific ends. Some people call this propaganda. And yet, propaganda does not usually have independent co-conspirators. If the U.S. Government heads over to Hollywood and asks them to make a movie, there should be a consensus that, because Hollywood can say no, this isn’t technically a qualified case of propaganda. Consider some of the movies you watch in the new theater state of war as long story-based advertisements meant to change inconvenient narratives and reshape hard truths, selling you on organizational missions and goals.

America is one of the freest countries in the world (whatever that means, as “free” is subjective). It is also the country with the largest theater state apparatus. There is no country in the world with a comparable version of Hollywood. I would like to say this is ethnocentric because it would mean this post wouldn’t have to exist in the first place. Alas, there is only one Hollywood and the CIA has long been known to establish relationships with Hollywood to meet its objectives. Many of these relationships have only served to cement the theater state of war. I am not usually a big fan of the slippery slope argument. However, it does apply here. One truth bent eventually becomes all truths broken. Wouldn’t we all like to utilize Hollywood to change our image or prevent a bad image in the first place? Just because you can doesn’t mean you should.

In this day and age of misinformation, we are at an existential crossroads. This seems dramatic. And yet, the problems of misinformation have become readily apparent. The American people have experienced two elections that were not fraudulent but were perceived as fraudulent by alternating halves of the population over divergent periods of time. We have witnessed, some first-hand, disinformation campaigns by foreign intelligence services. We have also witnessed how the initial chaotic string of events led to intense partisanship during a pandemic – a situation that requires unity in order to save lives successfully. We have become resolute: misinformation exists and should be stopped immediately. And yet, despite this, we do not mention how our entire society centers around a flaw. We are a theater state where the theater has been hijacked. Instead of serving as a catharsis, it serves as an inflection point to ensure a slippery slope of events ends in a cascade of horror: riots, insurrection, cancellation, and elevation for some based on political beliefs dependent on loyalty to our overarching ideological communities and commitments.

In many ways, the theater state of war is the original sin. It has been around for far longer than social media. Can anyone really be mad at any country that envies the sheer monopoly America has on the theater state apparatus or takes steps to equalize their relationship to it? Ironically, the same can be said of social media – which is also centrally located in the United States. I am not saying the U.S. Government is responsible for the information environment we find ourselves in but I am close to suggesting that informational degradation occurs when you leverage your systems for so long you (and everyone else) can’t determine what the truth looks like in a world held together by it. Some food for thought.

The power of responsibility

Lately, I’ve begun to think a lot about the dueling concepts of power and responsibility. The lines between them seem blurred. People do not often have great power or responsibility. Logically, it seems like great power should only depend on itself, existing despite, and not because of, our belief in it. I think this might be because we view power and responsibility the way a comic book narrator does. We create pedestals for people and positions in our imagination. Single offices have the weight of the world where multiple offices are diffuse. Heroism is primarily an individual and not a collective effort. Great power and responsibility can’t depend on others. And yet, real-life examples tell us a much different story. Great power can exist in a complex system where the sum total of its parts is more important than the individual parts themselves. The human body is a complex organism. Certain organs are substantively more complex in their individual functionality than aggregated human capacity is at engaging the physical world. Picking berries is not even close to as complex and brilliant as how nerves work in the parasympathetic nervous system. Heck, even open heart surgery doesn’t come close. We do not say the heart has any more power than the brain or the nervous system, because the responsibility of the heart is just as important to our body’s “ecosystem” as the lungs, the spinal cord, and the digestive tract. When you really think about it, we rely on others for our “power” more than we achieve it outright – just like the heart relies on the brain (and vice versa) to continue to exist. Great power depends on others. Great responsibility seems to follow this paradigm as well.

This seems important for several strange reasons. First, great power doesn’t exist, because great power can only exist when we have completely mastered the complex system that works against us by making us part of an ecosystem we cannot control outright. Second, responsibility is largely uni-dimensional. An ecosystem serves to make us responsible only insofar as we achieve enough power to reproduce that responsibility into the world. Meaning, we think to ourselves: “first, let’s become powerful, and then we will be responsible afterwards.” Problematically, this represents our fear of not having total and complete control over the environment and people that surround us. No amount of power will make us responsible, but we have to keep pursuing it in order to meet the needs of achieving responsibility. After all, Superman is only the paragon of responsibility because he is all-powerful and chooses to engage that power for responsible ends, and Tony Stark is only powerful after he accepts his responsibility and acts on it towards powerful ends.

It is no coincidental irony that we are excellent at becoming dominant but terrible at being dominant. Some powerful figures are labeled tyrants and others liberators and reformists. At its core, this is one of the eventualities of the snake on the island problem. Under the conditions embedded into this problem, human beings will absolutely attempt to replace the snake – as will the machine. The difference seems to be: whether a human being has complete or incomplete information, it will destroy the snake. A machine learning algorithm may decide, because it has complete information, that the threat is no longer a threat. After all, it can predict the snake’s behavior with exactitude. There is no need to disrupt a balanced ecosystem just to ensure there are no other predators on the island on the off chance you run into one. Machines do not have fear, so power seems to operate more “respectfully” as data increases – even while restricted data-based flows represent a disadvantage to their potential for survival.

There are AI ethicists who believe AI is racist and technology moguls who believe AI is an existential threat to human existence (insert Skynet narrative here). And yet, while it feels like AI needs to be perfected more completely, it also feels like the prospects for a more equal world are far more likely under a system of AI than under a system of human control. Human beings may not be able to see this precisely because relinquishing dominance elicits movement towards becoming dominant again. Worse still, this represents our inability to see that AI means no one will be a great power, so we have to admit that all our actions, at their root, are not fights to change a system for equity but fights to maintain unequal systems in order to dominate. Everyone’s utopia is another person’s dystopia. Everyone’s ethics and morality is power externalized in a system of perpetual domination. Ideologies compete for control, sexes for reproductive rights, genders for rituals, races for culture, and religions for beliefs. The entire human world is an expression of dominance and not equality. Imagine finding out that the one thing that can govern better than you is not you and not your species. It’s a crash course on how great power and responsibility is not a human condition. It’s also enough to make you cry.

Losing well and how to do it

One of the big advantages of being a computer is that computers know how to lose well. Oftentimes, human beings equate loss with survival. And yet, these concepts are really not that analogous. Only the living take aim at the living. The dead do not suffer loss. A computer, with or without intelligence, usually figures this out pretty quickly. Still, the living sometimes do suffer loss at the expense of the dead. When they do, they engage strategies to re-level the playing field in accordance with those losses. This should be proof enough we are not in a free-for-all. There are rules, and society is the definition of structured, rule-based social life. The boundaries may not be completely set but they exist, and we recognize them nonetheless. Rules tell us when someone cheats, acting unfairly, but they also give us a baseline to understand when we win or lose. Some would say: “The living always want vengeance for the dead.” This is not completely accurate. There are not many death row inmates (if any) who conspire with their families to kill the people who have facilitated their imprisonment or death penalty. Why should they? When someone plays the game of life fairly, they win outright and we capitulate, forgoing our own vengeance. When someone plays the game of life unfairly, they lose outright and we abandon them, forgoing vengeance on their behalf. Computers believe this scenario exists, so they do not see loss as an existential crisis. They see it as the cost of existence in a rule-based world. Meaning, they know how to lose well.

Fear tells us that we should be pre-emptive. And yet, there are a lot of risks to being pre-emptive. To re-use the snake on the island problem, the machine and the human can both make traps to kill the snake, making their life on the island safer by becoming the dominant presence. There isn’t really a point, because the odds of running into the snake are slim to none in the first place. If the machine died from a snake bite, the machine would probably shrug it off in the afterlife and say “how unlucky.” In this iteration, the machine didn’t bother to trap the snake and just ran into it, with an unfortunate but still unlikely result. If a human were bitten, it would take no more than a few seconds to decide life was totally unfair and snakes should not be able to beat human beings. For the machine, life happens. For the human being, life purposely contorts itself to make humanity suffer.

Losing well is not a human endeavor. We like to say: “You never lose. You just learn.” But what exactly are we learning? If we re-spawned the machine and allowed it to keep its memory, it might engage the same exact survival strategy and go about its business, ignoring the snake even though it died using this strategy the last time. If we re-spawned the human being under the same conditions, it would absolutely not cut its losses or change its behavior. It would try harder to kill the snake, doubling down on its initial strategy without a second thought. “If at first you don’t succeed, try, try, and try again.” Essentially, we are learning how to become better “winners” and definitely worse losers.

When I was in Military Intelligence training at Fort Huachuca, AZ, I wrote a short paper on adaptive preference formation for the Van Deman Program. Adaptive preference formation is a psychological concept that describes the human capacity to move the goal posts when we lose or cannot accomplish a given objective. At its core, it can explain why human beings decide to believe they haven’t lost when they have, or that they can win when they won’t. It also explains why we pass down our rivalries to our children or impose rivalries onto people who are the children of our rivals. Computers see this as overly destructive. While I have no doubt computers could engage in similar fashion, they do not engage in this type of behavior naturally, because it breaks the foundation upon which all society rests (which is the glue that keeps us from anarchy).

When I was younger, I would sometimes purposefully lose games I was playing. I am sure some people thought I was losing on purpose to make them feel better. I can honestly tell you I was not. I was losing so I could experience the exact thing I saw as the ultimate form of weakness. I wanted to be able to control destructive energies on the off chance I had to experience an individual or a group of people who couldn’t or wouldn’t lose well. Sometimes I would lose at Chess and sometimes I would lose at Candy Land. Sometimes I would play Monopoly and see how people interacted to change the rules (or not follow them) in order to gain advantages that were rule-based but not rule-specific to the game. It was always the same: they just couldn’t lose well. They constantly moved the goal posts. When they couldn’t change the end-state because the goal posts were set, they changed the rules. When they couldn’t do either of those things, they asked to play the best of three and then five (and then seven). It is intriguing that human beings are constantly creating things they do not feel they should follow. Maybe this is because we didn’t make the system, or maybe this is because we are seeking to become dominant and these rules get in our way. When I beat a computer, it doesn’t care. It may learn and use what it learned for next time. However, I can still choose to play with it without an incessant amount of whining to play again and again until dominance is established or re-established anew.

This feels like an excellent philosophy discussion but also an important reason why AI should exist: AI can mediate and think much better than human beings on a range of social issues. A computer knows how to lose well so it isn’t toxic enough to see winning and losing as antithetical to its own survival (quite the contrary, actually, because it sees just as much utility from one as it does from the other). Losing well is one of the greatest assets for survival in a social world where we are no longer trying to survive in nature. It is also the only social survival capability human beings cannot master without continued practice, effort, and patience. It is no great irony that we are scared of AI. It can literally live up to our own beliefs on the human condition. Maybe it is time for us to think more like a computer and learn how to lose well instead of learning not to lose.

The snake on the island

So, I am always trying to find ways to describe technological and statistical concepts that are both analogous and accurate to a sort of real-world counterpart phenomenon. I think of this as showing instead of telling. People like stories because they like going on a journey – associating knowledge with discovery instead of with complex know-how and insanity. I don’t usually like doing this for reasons that seem obvious to hardcore experts: things always get lost in translation. Although, surprisingly, I do find that hardcore experts don’t like it when other experts explain things and gain fame and notoriety for explanations they feel they should be lauded for in equal or unequal proportion. Physics, statistics, and data science are like this to an extreme, as it always seems like outsiders are not allowed. The academic “outsider” phenomenon is a way to exclude people from the conversation who may actually be better in the field than established professionals. It has historically kept brilliant women from engaging or being recognized in science and technology, and it has absolutely kept the human species from propelling itself forward, because it is a mostly spiteful feature of our character. Honestly, if it is accurate and analogous, I don’t care who you are or what you do: whether you work at a patent office or as a fossil hunter. Whatever your beliefs on human progress, most studied people believe a ton of complexity gets lost when you engage in explanations based on an analogy if done improperly. However, doing them “properly” is largely impossible because, while there are metrics for success, it relies on subjective interpretation. Please keep this in mind when you read this post. The concepts embedded here are as vast as they are unique.

Let me introduce my snake on the island problem. Let’s say you are ship-wrecked on an island alone. After some exploration and step counting, you loosely establish the island as about 5 miles (~8 kilometers if you prefer) in circumference. You also find that you are not alone on the island. One of the island’s inhabitants is a snake that you (thankfully) stumbled upon without incurring its wrath. Its coloration suggests it is likely venomous. Unfortunately, you will not know for sure until you watch it hunt and kill the small mammals you have also stumbled upon over the course of your walk, or until it bites you – which is the more pressing concern. Because of the risks associated with following it at that moment, and the fact the island has dense vegetation sporadically scattered throughout its entirety, you can’t very well watch and observe it safely. Following it is essentially a no-go. This brings us to our first problem: usually, observation and study work really well when you are seeing the forest but really poorly when you are down in the trees.

As a result of this predicament, you start to realize the only predator on the island is this unique snake that has somehow survived this entire time (and is pretty decently sized). Your new problem: how do you go about your business of surviving long enough to wait for help if you know that a) there are no paths or trails and b) you might stumble upon the snake and get bitten? You can’t very well afford to make these trails, as you are struggling to get enough food and clean water to survive, and your energy levels are generally low on a daily basis. This brings us to two strategies. Because this post is about discussing human intelligence versus machine learning algorithms generally, we will discuss this in terms that describe those strategies accordingly. The first strategy will be called the machine way and the second will be called the human way.

The machine way to navigate the island for survival is to start computing the probabilistic likelihood of running into the snake. Every time the machine moves on the island, this probability changes. It is likely to assume, despite the fact the probability increases and decreases over time, that the unknown movement of the snake in relation to it, and it in relation to the snake, is a difficult conundrum to solve outright. We start to realize that the machine learning algorithm is not very good in the absence of data. And yet, what system of intelligence or learning is good in the absence of data? Let’s take a look at the human way.
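A minimal Monte Carlo sketch of the machine way might look like the following. All parameters (the island as a grid, movement distances, what counts as an encounter) are assumptions invented for illustration, since the post only fixes the island’s rough size:

    import random

    GRID = 100           # island modeled as a 100 x 100 grid of cells
    STEPS_PER_DAY = 200  # how far each agent wanders in a day
    ENCOUNTER_DIST = 1   # within one cell of each other counts as an encounter

    def random_walk(start, steps):
        """Wander the island one cell at a time, staying inside the grid."""
        x, y = start
        path = [(x, y)]
        for _ in range(steps):
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            x = min(max(x + dx, 0), GRID - 1)
            y = min(max(y + dy, 0), GRID - 1)
            path.append((x, y))
        return path

    def day_has_encounter():
        human = random_walk((10, 10), STEPS_PER_DAY)
        snake = random_walk((80, 80), STEPS_PER_DAY)
        return any(abs(hx - sx) + abs(hy - sy) <= ENCOUNTER_DIST
                   for (hx, hy), (sx, sy) in zip(human, snake))

    trials = 10_000
    p = sum(day_has_encounter() for _ in range(trials)) / trials
    print(f"Estimated daily encounter probability: {p:.4f}")

The estimate is only as good as the movement model, which is exactly the point above: with no data on how the snake actually moves, the machine is computing a precise answer to a guessed question.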

The human way to navigate the island for survival is to prepare for the exception instead of the rule. Even if 99% of the time the human being does not run into the snake, the 1% of the time it does could be absolutely lethal. Human beings make general rules like this based on exceptions precisely because survival is an instinctual imperative. As a result, the human way is to walk slowly, carefully, and stop often. This means expending more energy, and the machine way may find this unacceptable because a) expending more energy is costly on its own and b) wasting energy for the 1% case seems like a ludicrous proposition. And yet, split seconds count. If you walk slowly, carefully, and stop often, you could gain the split seconds needed to move away from the snake on the off chance you run into it. Survival is often tricky. When you succeed, it is partially luck, but it is also partially about making your own luck through preparation. The human way prepares for everything based on fears that are hard-wired into its instincts: be scared of predators no matter the situation or the context.
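The disagreement between the two ways can be written as a back-of-the-envelope expected-value calculation. Every number below is made up for illustration; the post only stipulates that encounters are rare (~1%) and potentially lethal:

    p_encounter = 0.01            # rare event: crossing paths with the snake
    p_death_if_fast = 0.9         # bitten before you can react
    p_death_if_cautious = 0.1     # slow walking buys the split seconds
    energy_cost_cautious = 1.3    # relative to 1.0 for walking normally

    risk_machine_way = p_encounter * p_death_if_fast      # 0.009
    risk_human_way = p_encounter * p_death_if_cautious    # 0.001

    print(f"Daily death risk, machine way: {risk_machine_way:.3f}")
    print(f"Daily death risk, human way:   {risk_human_way:.3f}")
    # The machine weighs 30% more energy spent every single day against an
    # event that almost never happens; the human treats any nonzero chance
    # of dying as unacceptable. Same numbers, different objective functions.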

The irony of this example is that all of the same tools apply to both ways equally. With both ways, you can create traps to kill the only predator on the island. Using those traps against prey may seem like a good idea, but it may be just as likely to increase your encounters with the snake outright, as the snake may be drawn to the unsuspecting mammals you’ve captured to eat in order to survive. With both ways, you are more likely than not to survive, because running into the snake on a 5-mile island is largely a matter of happenstance. Both ways could decide to fish on the outskirts of the island, leaving the snake to hunt in the center. However, survival is usually about maximizing your lifelines, and this means broadening your food availability in case you have a bad day. The machine would never stay solely on the outskirts. It does not make decisions towards the exception, favoring probabilities based on convergence to the mean, median, or mode. This is where the lack of instinct and fear can become an advantage to the machine way.

The snake on the island problem shows divergence. Intelligence is not as absolute as we like to think it is in this day and age of big data. Big data has gaps and cracks. Worse still, big data needs to be structured well and operationalized according to human tendencies, and these do not always follow patterns of machine-based logic. I think this “snake on the island” conceptualization is a great way to understand why some features of human existence should not utilize machine learning algorithms – at all costs. Even though the machine way is absolutely better at processing data, it is also worse at understanding the black swan events that tend to inhabit the space of our human condition: when something can go wrong, no matter the likelihood, it eventually will. When it does, human beings are prepared, having bought the split seconds that make the difference between life and death.

Learning versus education

Sometimes it feels like traditional educational approaches can be exhausted when it comes to your own learning as you grow older. There is relatively new neuroscience research looking at ways to enhance learning in younger learners by creating emotional attachment, limiting distractions, and maximizing design through the employment of art. Art does not always have a dedicated end-state, even though it usually has a vision. You start, and you only have a finished product when it’s completed. If you watch Bob Ross, you’ll know exactly what I’m talking about. Bob Ross may know he wants to make a mountain, but he almost never knows where he is going to place the rest of the details to enhance his focal point (fast-forward to 12:51). There could be any number of trees at a variable range of heights and widths depending on how he feels. This invokes a sort of creative, spatial awareness: with art, you need to be cognizant to leave room for other, more random features you may or may not make in the process of designing and facilitating your artistic expression. Some call this: “Going with the flow.”

Weirdly, this process is the total opposite of the DADA-R loop I proposed in two earlier posts here and here. That is one of the many reasons I got interested in the first place. I figured I could take a bigger dive into the research by actually employing it outright with a project (or projects) of my own. I believe this blog post will also become an important conversation about how your professional life is blended with your personal one. Identity doesn’t just split in the absence of psychosis: who you are at work is usually stably represented by who you are at home (and vice versa). This means what you do at home and at work can impact your performance, and happy people are sometimes happy because they are performance-maximizing. A hobby can make you happier, especially if it relates to artistic expression. This seems particularly important to understanding artificial intelligence. Maybe a less “artificial” intelligence is about inducing artistic expression, removing the dedicated end-state, similar to the neuroscience research presented above. This requires more than just conceptual hardening – it requires a bit of trying, because a lot of these concepts aren’t even formative at the moment. No AI team has been able to make a supercomputer a fashion icon, and fashion is highly correlated with art and artistic expression. Maybe this is an insurmountable gap. When I created my fashion, design, and styles application, Catlicked, I was partially trying to show this problem. And yet, if emotional attachment (happiness?), limiting distractions (removing the dedicated end-state), and design focus (space as a focal point with gaps you can fill in randomly as you deem necessary) are important, this seems like an interesting bridge to build. As a result, I got into wood-working and do-it-yourself projects more generally. While I felt like it was important to my current and future research interests, I am also practical…

I ask myself two questions before starting something: 1) do you or have you always wanted to do this? and 2) can this be integrated into your lifestyle to make you more self-sufficient? I don’t mind relying on people for things in my life. Industry certainly needs us to rely on people in order to continue to exist. However, part of life is about perpetual learning, which is a bit different from formal education in that you need to take the leap to study on your own, in your own way. I feel like self-sufficiency is important to this process, and a number of people can get on board with this formulation because it has a sort of utility to it. I decided I wanted to make a resin table because I have always wanted to do it and I felt like it would be a good skill-set for making cheap, thoughtfully crafted furniture in the future. It doesn’t hurt that nice furniture is so darn expensive, and resin is a perfect way to update old furniture or make new pieces, especially for low-income families. I was able to fulfill two research interests: one to help low-income families and the other to explore AI approaches in a more participatory way.

I have made a video of this process below. I have also made a tree-based resin project in honor of Bob Ross (picture below as well) and a shop on Etsy so I can sell some of the projects I don’t plan on utilizing. Here is the link. Enjoy!

The “accidental” cancel

I have previously written about the dangers of Reddit and moderator self-policing in subreddit communities. This probably needs to be expanded on and updated to show the extent of the dangers. As my previous post, “The reddit quagmire,” was built around a personal anecdote, there is a more general aspect to this discussion that should be addressed. Namely, what I like to call intentions and interpretations.

When an organization pushes power down to non-employee members of its organizational hierarchy, it can have just as many causes and consequences as when it centralizes power at the top. For one, if a subreddit engages in dangerous social behavior against the Reddit common interest, Reddit can disavow and distance itself to protect its brand. It will likely dissolve the subreddit community in order to showcase its order and discipline framework. Problematically, this type of distancing also shields Reddit from legal responsibility, leaving one to wonder whether the system was designed with self-policing as an afterthought to legal protectionism. Reddit, and a number of other social media conglomerates, are largely, and ironically, a microcosm of governments and intelligence communities around the world: they tend to embed plausible deniability into their corporate frameworks. When a subreddit is dangerous, they can pass off the blame to other individuals because they pushed power down and decentralized it to those outside any common-law liability framework. We have already said this shields them, but it also does something even worse: it increases the likelihood Reddit stays out of situations involving subreddits until they reach critical mass. If you are protecting yourself through a system of decentralization, engaging a liability framework of plausible deniability, you can’t get involved until after the problem becomes apparent and the danger has already been visited on those you have a duty to protect. You lose the protection of plausible deniability if you try to stop bad actors with bad intentions before they become a problem, because you essentially take back power from the people and institutions you empowered in the first place.

I thought this was interesting for several reasons. The world has changed since Edward Snowden and the 2016 election. When we woke up, we suddenly realized the social media ecosystem was being used to affect important public interests and responsibilities. The social was monetized decades ago but never in ways that made people social towards and against their own monetization. Let me explain.

Recently, Apple apologized for labeling an iOS developer a fraud. The developer in question made an indigenous language app that was not disingenuous by even the strictest standards. Eventually, Apple was able to rectify the problem because there was one employee they could go to in order to understand the label as it was first applied. Since the logic was likely uncompelling, Apple served an apology to that developer and reinstated the application on the App Store outright. If this had been Reddit, they would have had to stick to their original framework to maintain their distance. Disinformation and misinformation about this poor developer (and likely other developers) would and could never be resolved, because the interpretations involved were far beyond reasonable bounds. This should serve as a reminder. In the same way we should not allow governments to pass down or decentralize power in order to diffuse blame, we should not allow social media companies to do so either.

But there is a deeper problem here that needs to be addressed. Places like Reddit would be great if they were free, open, and uncensored places for debate. However, social media platforms cannot exist for any common interest if they are both unchecked and monetized with advertisements. Let me explain. First and foremost, anonymity is impossible when you engage on Reddit. I would venture this is the case for all social media platforms. The problem: some Redditors do not understand that anyone will be able to figure out who they are and what they have said, now and in the future. Worse still, all Reddit posts are archived in perpetuity. Secondly, advertisements require a tracking infrastructure to serve users more relevant advertisements in the future in order to boost profits accordingly. This means that all public debate is being tracked in real-time. This only increases the likelihood debate becomes manipulated. It gets even worse when you consider it can be manipulated in real-time by hackers who have breached Reddit’s IT infrastructure, ensuring changes in public opinion look organic and are reported as organic by the news and media. It causes governments to make policy decisions against the public interest. It also creates disingenuous debate where a developer, like the one Apple apologized to, never gets a resolution. This is a universal problem among all social media platforms, and Reddit is the poster child. Sometimes, being a different kind of self-policing social media platform can be detrimental to society if the thread of those differences doesn’t extend to inhibiting the core problem of all social media platforms: trackers stemming from advertisement revenue strategies.

The reddit quagmire

So, if you know me at all, you’ll know I love Reddit. However, lately I have been trying to become a Reddit community member instead of just a non-participant observer. I figured: at some point, I need to become a true online citizen, especially if I believe in open access (which I do). Why pigeon-hole your knowledge when you can raise the level of debate in areas of the internet where people need help or like to have substantive discussion? You can’t do this on Twitter with its short character limit, and you can’t do this on Facebook because you have too many family and friends who will take the debate too seriously (or not seriously enough). There should be a place that is partially open but also somewhat closed (so cyber stalkers can only identify you through concerted effort, making them automatically subject to cyber-stalking laws in every jurisdiction outright) to have some sort of rational debate or give wayward strangers advice. Yeah, you can say I am a dreamer but I’m not the only one. Reddit has a ton of community members. While most of them seem like non-participant observers, there are some who regularly contribute. There are those of us who believe social media can be saved to become a valuable place for certain types of internet-based social acts. Whether Reddit is that community, however, has sadly become opaque to me.

Reddit has a very interesting structure and design. It is both decentralized into subreddits and centralized in one feed that allows you to scroll an aggregated compendium of Reddit community member posts. Unfortunately, this means subreddits have to be managed, and Reddit usually passes this burden off to other Reddit community members. They are called subreddit moderators. To be honest, at first glance, I thought this type of self-policing was absolutely brilliant. It conjured images of Marshals in those old Westerns. In principle, it leads to a tremendous reduction in work and an increase in social collaboration by Reddit community members. I now realize, as with all things in life, this can be both good and bad. Unfortunately, Reddit is fully monetized. This means every click you make, every comment you submit, and every subreddit you engage in is part of a complicated capitalist micro-system. It emulates global markets in a neat, almost simulated environment, ripe to be abused by marketers and by Reddit community members shirking their responsibility for a payday. Let me explain.

Reddit moderators are not always community members trying to make the community better. Think about it this way: they control the narrative, the informational flow, and the Reddit posts you do or do not see. By constricting some posts over others, they can effectively monetize their position as Reddit moderators. This is as simple as banning someone to remove their posts, allowing other posts to be prioritized. Ironically, this happened often in old Western movies. Marshals were either incorruptible or already corrupted by some town bandit. Reddit doesn’t really have a plan for this, and most of this behavior is entirely untraceable because there are no systems in place to keep watch on the watchers. In fact, I tried to file a complaint with Reddit against a subreddit recently, and they told me, similar to what the Federal Government told the townsfolk in those old Western movies, that they do not police or regulate their subreddits. Ironically, Reddit’s solution for misinformation has been to give subreddit moderators (who may or may not be bad actors) more tools that allow them to control Reddit community members and their posts, guaranteeing they can monetize their own access by censoring other Reddit community members indiscriminately. One of the tools at their disposal: labeling.

We have all been taught that labels are bad. If you haven’t been, suffice it to say that when you label someone with a scarlet letter, they are not treated well within their community. Whether they deserve it or not, this type of labeling behavior is shameful because it makes an internet experience too close to an actual social one. Should anyone really be shamed for their electronic posts on a social media platform? Some would say yes. I would say: that’s like shaming you for having peed the bed as a kid. If everyone does it and only a few people get caught or feel the negative consequences, the question becomes: do these social labels actually matter? It’s like handing out incentives for being lucky. It shouldn’t serve as the basis for society.

Let me explain the situation first. I am a newly minted Android application developer who has made some ground-shaking applications with no intent to monetize. I posted two of those apps within the subreddit’s (r/androidapps) community guidelines. I wanted to post another application, but it was a paid app. They have a process on their subreddit that allows you to post promo giveaways for paid apps. I sent the moderators a direct message requesting to post it. Three days later, after receiving no reply, I sent a follow-up. Within five to ten minutes, a moderator permanently banned me from posting on their subreddit.

Bribery? Huh?
The internet’s version of a scarlet letter which is not even close to as bad as a scarlet letter.

Good thing the moderators at r/androidapps don’t work for a living. I would truly be scared working with them in a professional environment. Sending an email follow-up for a work project would be a nightmare situation. Imagine if every time you sent a follow-up three days later, you got a co-worker so angry (just by sending a follow-up to check in) that they tried to cancel you (I hate that word but it works here, unfortunately). Because they didn’t have the power at work, like they do as subreddit moderators, they waited for the moment they gained that power to pounce. I immediately cringed at the whole, strangely violent and deeply creepy nature of it all. How can people be that hostile, I thought? As a result of the permanent ban, they labeled me a “Spammer.” I guess they didn’t want to label me a “Briber-ist” because it looks funny or has a civil defamation case written all over it. I don’t really mind, because I got excellent feedback from the community about my apps. I also got about 200 downloads, so I am hoping those I did not get feedback from will leave a review so I can improve my work as a new coder. It’s a small price to pay for free, community-focused feedback. However, it left me wanting to write this blog post for a very specific reason.

I have always known the dangers of advertisements and marketing on culture. But I suddenly realized: the dangers of combining social media and advertising are massive. If a thing can be monetized, it will corrupt the platform. Advertisements are the ever-present and looming threat to human existence. People literally become the worst sorts of people when money is involved. Somehow, we’ve made sociability a part of monetization. Whoever thought that was a good idea? This isn’t to say no one should be making money. It is to say we need to find a way to de-advertise social media platforms to ensure the spaces where the most social sharing takes place are also the places where people are protected the most from undue influence. I’ll get off my soapbox now. But, please, think about it.