Gays Against Groomers Get Financially Deplatformed
By Rod Dreher. Editors' Note: This is a long article, but very much worth reading as a warning to all citizens about the growing surveillance state and the increasing control of Americans through the collusion of state, corporate, and big tech power in every aspect of the lives of 'We the People'.
Dissident group forbidden by PayPal and Venmo from using their services. This is how the social credit system will be used against us all.
Just like that:
This is how soft totalitarianism works: no gulags, no jail time, just being excluded from the marketplace. We are rapidly approaching the point where one may not buy or sell without permission of the Regime.
This is also how soft totalitarianism works: the “Regime” is not the State alone, as in the earlier iteration of totalitarianism. It is rather the informal coalition of elites in government, media, finance, academia, and private industry (Yarvin’s term “the Cathedral” is also good) who share the same illiberal left-wing convictions, and act in concert. It is Venmo’s and PayPal’s right to do what they’re doing. But the effect is bad for democracy.
It’s like with Amazon, when it decided not to sell Ryan T. Anderson’s book critical of transgender ideology, and similarly themed books. It’s Amazon’s right –– but if Amazon, with its dominant share of the book market, decides that it will not sell a certain kind of book, then that kind of book will not be published.
It’s entirely legal. Do you want a system in which a bookseller is forced to sell books he finds immoral? I don’t. But in Amazon’s case, making a fully legal decision has dramatic consequences for freedom of speech and debate.
I don’t know how this should work, in terms of legislation to solve the problem of financial deplatforming. But this is an issue conservative, libertarian, and authentically liberal politicians should start talking about –– and, when workable policies and laws present themselves, then acting on them. If not, people who dissent from the Regime’s ideology will find themselves more and more driven to the margins, and forced through non-violent means to comply.
Two years ago, just before the publication of Live Not By Lies, one of my readers e-mailed me (I posted it at the time; this is a reprint):
I hope some of this will be of insight or use to your readers, and maybe offer some advice on how to reduce the impact of the coming Social Credit system on their lives. To start, I am a software developer who has worked with machine learning and “AI” professionally. While it’s not the primary focus of my daily work, I tend to keep up with developments in the field and am skilled enough to use it to solve problems in production systems –– in other words, things that are consumed by end users and actually need to work right. Problems I have used it for are recommending content to users, predicting essentially how long a user will remain engaged on a site, and so on. Properly used, it is an extremely powerful tool that can improve people’s lives by serving them content they would be more interested in, and not wasting their time with stuff they’re not.
Maliciously used, well… at a minimum, it can be used to manipulate people into staying on a site longer by serving content that is designed to stimulate the brain’s pleasure centers. Facebook does this to keep people reading items which are tailored to what experience they tend to prefer and things they’ve liked. If these things can be used to increase a site’s user engagement even by only a few percentage points, it can pay off big in terms of increasing ad revenues. Other practical applications include quality control, homicidal Teslas, security systems, and so on.
Most people who don’t work within the field don’t understand what artificial intelligence can and cannot do. To start with, the term itself is misleading. Computers have not yet attained true “intelligence” in the human sense. We are probably a ways off from any system attaining consciousness in the classical sense, though I suppose it’s worth pointing out that systems can be designed to act rationally within a given set of information presented to them. DeepMind, Google’s AI “moonshot” company that intends to create a true artificial intelligence, has designed a system that can play old Atari games without any instructions. While I haven’t read more into the details, I would imagine that this happens by the system trying an action and seeing if it results in a more positive state (losing fewer lives, achieving a higher score, etc).
On the other hand, computer games with artificial intelligence generally don’t use true AI techniques to simulate a player, as it’s too computationally expensive and often gives less than desired results. A good example of this was a strategy game where the AI simply had its units run away to an isolated corner of the map because this was the most “rational” decision it could make within the framework of the game. In many senses, though, a true thinking machine is on the same timeline as a flying car or fusion power.
That said, a social credit system does not actually need true artificial intelligence to function, and can actually be built with some very simple techniques. You take a set of data points and then run them against an algorithm to determine the likelihood of a match between a person and that data set. For example, if you’re trying to determine whether someone is an active member of a “bigot” denomination of Christianity, or holds its views, you would find some people who fit the profile, extract the data points that distinguish them from someone who does not, and then check unclassified people against those points.
So, if you know a “bigot” Christian shops at a religious bookstore, gives more than five percent of their income, frequents certain forums, etc., then you can develop a data profile of what this type of person looks like. In turn, if someone meets only one of those three criteria, such as visiting the forums but not engaging in the other two activities, the probability of them being a “bigot” Christian is much lower. Maybe they’re just an atheist trolling on the forums. Likewise, if they visit a religious bookstore, they might just be browsing or buying a gift, and this would require adjusting the algorithm’s inputs to filter out casual shoppers.
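The kind of profile matching the reader describes can be sketched in a few lines. This is purely illustrative –– the signal names, weights, and threshold below are all hypothetical, not anything from a real system:

```python
# Illustrative sketch of scoring a person against a behavioral profile.
# Each observed signal contributes a weight; the weights and threshold
# here are invented for the example.

def profile_score(signals, weights):
    """Sum the weights of the signals a person exhibits."""
    return sum(weights[s] for s in signals if s in weights)

# Hypothetical weights for the three signals named in the text.
WEIGHTS = {
    "religious_bookstore": 0.3,
    "tithes_over_5pct": 0.4,
    "frequents_forums": 0.3,
}
THRESHOLD = 0.6  # arbitrary cutoff for illustration

def is_flagged(signals):
    return profile_score(signals, WEIGHTS) >= THRESHOLD
```

One signal alone (say, just lurking on the forums, at 0.3) stays under the threshold –– the casual shopper and the trolling atheist pass –– while all three together clear it easily. Tuning that threshold is exactly the "adjusting the algorithm's inputs" step.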
The big challenge involved in doing this is not actually running the algorithms, but classifying and processing the data that the algorithms operate on. This is where data science and big data come in.
What happens is that there are statistical analysis programs which can be run against a data set to begin filtering out irrelevant things and looking for specific patterns. Continuing with the above example, let’s say that both a “bigot” Christian and a hardcore woke social justice warrior buy bottled water and celery at the store. This data point doesn’t distinguish the two, so it gets tossed out. By refining the data through statistical techniques, it quickly becomes clear which points should be looked at to distinguish membership in a given data set. The SJW doesn’t attend church, or attends only a “woke” one, so they can be filtered on that point. This is why seemingly innocuous things like loyalty cards, social media posts, phone GPS, and so on are actually dangerous. They can tie your life together and build out a profile that can be used to classify and analyze you, with connections you never thought would be made. All it takes is building “training sets” and then throwing live data at them to have a usable outcome.
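The filtering step above –– tossing out data points that both groups share –– can be sketched with a simple frequency comparison. Again, the samples and the 0.5 cutoff are invented for illustration; real feature selection uses more careful statistics:

```python
# Illustrative sketch: discard data points that do not distinguish
# two groups. A feature seen equally often in both classes carries
# no signal and is filtered out.

def discriminative_features(group_a, group_b, min_gap=0.5):
    """Keep features whose frequency differs between groups by at least min_gap."""
    features = set()
    for samples in (group_a, group_b):
        for person in samples:
            features.update(person)
    kept = set()
    for f in features:
        freq_a = sum(f in p for p in group_a) / len(group_a)
        freq_b = sum(f in p for p in group_b) / len(group_b)
        if abs(freq_a - freq_b) >= min_gap:
            kept.add(f)
    return kept

# Hypothetical training samples: both groups buy bottled water and
# celery, so those points get tossed; church attendance separates them.
christians = [{"bottled_water", "celery", "attends_church"},
              {"bottled_water", "celery", "attends_church"}]
sjws = [{"bottled_water", "celery"},
        {"bottled_water", "celery", "woke_church"}]
```

Run on these toy samples, `attends_church` survives the filter while `bottled_water` and `celery` are discarded –– the grocery data only becomes dangerous once it is joined with the records that do discriminate.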
Ultimately, the power of all this is the ability to do on an automated basis what would have taken a secret or thought police a ton of legwork. Even if, say, there is still a secret police or social media police force involved in making the ultimate decisions about how to handle a particular report, machine learning can help sort out private grievances from real dissenters by looking at what distinguishes a legitimate report from a fake one. No longer does expertise have to be produced and retained via training and personal experience; it can now simply be saved to a database and pulled out, refined, and applied as needed. These routines can be packaged up, distributed, and run at will, or run by accessing cloud servers. Huge numbers of profiles can be run at any given time, too. While building the profiles is computationally expensive, running them is very quick.
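The point about expertise being "saved to a database" is worth making concrete. A trained profile is just data –– a small blob of weights and a threshold –– and applying a stored blob to a batch of records is a cheap linear pass. Everything named here is hypothetical:

```python
# Illustrative sketch: a "trained" profile serialized as it might be
# pulled from storage, then run over a batch of records. Building the
# profile was the expensive part; applying it is a fast lookup-and-sum.
import json

stored = json.dumps({"weights": {"attends_church": 0.7,
                                 "frequents_forums": 0.3},
                     "threshold": 0.6})

def run_profile(profile_json, records):
    """Apply a stored profile to (record_id, signals) pairs; return flagged ids."""
    profile = json.loads(profile_json)
    weights, threshold = profile["weights"], profile["threshold"]
    flagged = []
    for rec_id, signals in records:
        score = sum(weights.get(s, 0.0) for s in signals)
        if score >= threshold:
            flagged.append(rec_id)
    return flagged

records = [("p1", ["attends_church"]),
           ("p2", ["frequents_forums"]),
           ("p3", ["attends_church", "frequents_forums"])]
```

Nothing in the scoring loop requires a human, which is the reader's point: the "expertise" lives in the stored weights and can be distributed and rerun at will.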
The other thing that people in your comment section don’t grasp is that this is not a political issue in any sense of the word. Tech and the consolidation of power doesn’t really have a right or left to it; it is about technocracy.
The reality of why this will be applied to people opposed to critical race theory is simply that opposing CRT means you are a skeptic, that you still think 2 + 2 = 4, and that you oppose what the elite are teaching you. People who think this is overblown or won’t apply to them, whatever their politics, are naive. Anything which militates against accepting a common vision is the marker here, and it could be anything else down the road as trends change and what is acceptable shifts.
Driving a gasoline-powered car, having your recycling bin half full, or buying bottled water might all be things that impact your social credit score if it begins to be applied to environmental issues. Drink three beers instead of two at a restaurant? You’re going to be flagged as an alcoholic and watch your health care premiums shoot up, or perhaps lose coverage altogether if we have a single-payer system. The true evil in this is how it dehumanizes people, categorizes them, and removes their individuality by reducing them to a statistic. “I’m not a number, I’m a name” no longer applies. Mental illness is overrepresented in the tech world, and you run into all sorts of people who are narcissistic, sociopathic, and so on. How well and fast and elegantly something runs is the true yardstick, not whether it is ethical and moral.
Also, the notion that there will be laws against this sort of thing, that there will be a legal deus ex machina that will stop this soft totalitarianism, is just laughable. Things like GDPR [General Data Protection Regulation, an EU law –– RD] are a step in the right direction, but data and web services are so interconnected today that trying to erase all your digital tracks is going to be very difficult. Besides, if you’ve been offline for several years, you’re trying to hide something, right? Tech used to be full of people with a very libertarian and free-thinking mindset. This was also when it was at its most innovative. These days, identity politics is pushing out libertarian politics, and the idea of curtailing speech and access for people who are “racist,” etc., is not just accepted but promoted. Even if law doesn’t come into it, technology has been biased against personal freedom and privacy for a long time.
If nothing else, underfitting a data curve –– that is, applying a profile too broadly –– might result in people being unfairly barred from banking, jobs, public transit, and so on. Think of the poor guy in Fahrenheit 451 whom the mechanical hound killed in place of Montag after he escaped. You likely won’t be able to appeal, as this would reveal too much about the algorithms being used by the system. Maybe you’ll get scored differently again after a period of time, but there is no guarantee of that, either. The system will always err on the side of underfitting, too. In the new Sodom, there are not fifty righteous men. Everyone is guilty of something.
Dealing with this is trickier than it might seem, but the system can also be spoofed somewhat. As you point out, China’s system relies on cameras being present everywhere, and also on associating with people who meet certain criteria to lower your score. The first and most important thing to remember is that you are going to have to be cautious and be fully “underground.” Public pronouncements that run contrary to the acceptable narrative are going to be an automatic black mark on your score. Keep your mouth shut; don’t post memes on Facebook. Otherwise, you’re going to suddenly find that your bank account is locked for “supporting extremism,” and you’ll have a pink slip waiting for you at work the next day.
Now, getting down to practical matters: if you and someone with a low social credit score ride the same bus to work, that’s probably not an issue, but if you ride the same bus and then have lunch together, big red flag. Will sitting next to each other matter? Maybe, but you might also get on at the same stop, so this would be less of a red flag, particularly if you don’t talk to each other beforehand. Change where you sit on the bus, sit together some days and not others, and change your seating position relative to each other. At some point, it becomes increasingly difficult to develop rules from a pattern, and your behavior might be thrown out as an outlier. This is going to be harder to maintain if you need long-term interaction with someone, like at a prayer group, but it can still be used for some one-on-one time, important communication between groups, spreading information manually, etc.
Speaking of prayer groups, the obvious answer is to congregate in places where there are not likely to be any cameras. As cameras become smaller, less visible, hooked into networks, and capable of better low-light performance and resolution, it’s going to be increasingly difficult to know when you’re being watched and when you’re not. I’d expect parks to be monitored, and if not inside the park by drones or cameras, then at least at the entrances and exits. Same group of ten or so people going to the same place every week for the same amount of time, especially if one or two are known to be problematic? Big red flag. On the other hand, people meeting in random locations, in varying sizes, at varying times might slide by the system and be lost in the “noise.”
Phones will be tracking people, of course, and phones being in the same location at the same time is a big red flag. If you leave your phone at home, hope that you don’t have an internet of things device connected, or a camera on your building, or you’ll be known as a person who leaves their phone behind. Big red flag. If you leave your phone in your dwelling but are seen to go exercise without it, maybe less of a red flag. Just don’t be gone for three hours on a “quick run.”
There is also the idea of too little data being available, a “black hole” if you will. If you don’t carry a phone, use social media, or visibly associate with anyone, you’re likely going to be flagged because you’re seen as trying to hide your life and activity. It’s worth noting that phones are basically ubiquitous in Chinese society, and people were trying to estimate the actual impact of Covid-19 in China based on how many fewer people were buying and using cell phones after December of last year. Why are phones ubiquitous? Because people need their positive behavior to be recorded.
Ultimately, the idea is to either engage in the bounds of normal behavior or engage in behavior that doesn’t meet an expected pattern and will likely be trimmed as an outlier (assuming outliers aren’t investigated as well). If you need to meet, do so in a way that would be socially acceptable and plausible, like an art class or a pickup sports game in the park. Taking a break on a random park bench with random people at random times might work as well. Use written communication or learn sign language to avoid bugs (conversations can be analyzed for keywords as well). The thing is, the more effective a social credit system becomes, the less human intervention and legwork there is likely going to be. No one’s going to bother looking at what you wrote in a notebook, because it would take too much effort to track someone down and actually examine it. They care more about where you go, when you go there, who’s there, and so on. The faith in technology is such that there is a strong bias against manual intervention.
No, none of this is going to stop the soft totalitarianism, and I have been repeating over and over that orthodox Christians and other soon-to-be-unpersoned groups need to really start understanding and preparing for a life as “untouchables.” If you post the wrong thing, say the wrong thing, hang out with the wrong people, your card is going to be cancelled, no other banks will pick you up, you’re likely not going to be able to get a job due to a low score, and so on. You might not even be able to pick up menial work. Under-the-table work will be gone once everything needs a card for a transaction. All cards are issued by banks, and most of them are woke. Think there will be a “rogue” bank that will take you? Good luck with that. If you think, okay, you can start your own small business growing and selling food, other goods, etc. –– you need to be able to buy and sell supplies. A cashless economy requires having a card and an account. You won’t be able to open an account due to “extremism.”
As you’ve been hammering home over and over again, now is the time to form communities. These can quite literally provide support and safe harbor for internal exiles, if they are careful. This isn’t just about maintaining the faith, but about maintaining those who will have nowhere else to go. Barter, self-sufficiency, low tech –– these things are going to be massively important.
From Live Not By Lies:
I’m on my way to Canada now to give a couple of LNBL-themed speeches. I have more to talk about now. I do every day.
*****
This article was published by The American Conservative and is reproduced with permission.