uptick in chatter around phrase "gates of hell" in mainstream.

submitted by prototype to chatter on Nov 29, 2023 14:10:04 (+10/-3)

Not interested in the Q narratives and all that.

Just saw it in my daily site scans, then again in my news caption scans, and also heard it in casual conversation.

Wasn't searching it out. Wasn't reading or involved in anything that would have any relation to this phrase, but it's coming up with enough frequency to be one deviation outside the mean.

Just seems odd.


11 comments


[ - ] squidicuz 4 points Nov 29, 2023 14:31:10 (+4/-0)

they must have broken the 7th seal? prepare the trumpets!


but ya ... odd.

[ - ] Sunman_Omega 2 points Nov 29, 2023 14:37:22 (+2/-0)

Not yet. There's no black sun or boiling seas yet, so the sixth seal is still in place.

[ - ] registereduser 0 points Nov 29, 2023 15:34:57 (+1/-1)

[ - ] deleted 1 point Nov 29, 2023 15:18:56 (+1/-0)

deleted

[ - ] yesiknow 0 points Nov 29, 2023 21:49:16 (+0/-0)

The foundations drop it through the media. The article writers get "suggested" those phrases, so do stand up comics and talk show scum.

One weekend I had three papers, a glossy magazine, and some other publication that all had the word "ubiquitous" in them. The media has an extremely narrow vocabulary. That was a long time ago. They've been organised forever.

[ - ] prototype [op] 1 point Nov 30, 2023 03:52:29 (+1/-0)

The foundations drop it through the media. The article writers get "suggested" those phrases, so do stand up comics and talk show scum.

That's the beautiful thing: if you have the right data mix, then you know what the distribution and norm are, so when you get upticks like that it's a matter of doing the following, though this is from memory (haven't looked at the code in a while):

sig = 3  # significance threshold, in standard deviations
incidents = []
for item, n in unwrap(items):  # unwrap yields (phrase, sublist) pairs
    current = std(dist(item, n))         # spread of the phrase within its sublist
    baseline = std(dist(item, sources))  # spread across all sources
    if current > baseline and (current - baseline) >= sig:
        incidents.append(Incident(str(item), insublist(item, n),
                                  dif=current - baseline))


unwrap unwraps the list into multiple iterators using some preprocessing magic.
dist gets the distribution of a sublist relative to a larger list (also uses unwrap).
And all we're asking here is: what is the standard deviation of this incident?
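For context, a minimal sketch of what dist and std might reduce to, going purely off that description; the corpus layout and the signatures here are guesses, not the actual code:

from collections import Counter
from statistics import pstdev

def dist(item, corpus):
    # Hypothetical: relative frequency of `item` per source, i.e. how
    # the phrase is distributed across the larger list.
    counts = Counter()
    for source, tokens in corpus:   # corpus assumed as (source, tokens) pairs
        counts[source] += tokens.count(item)
    total = sum(counts.values()) or 1
    return [c / total for c in counts.values()]

def std(values):
    # Population standard deviation of that distribution.
    return pstdev(values)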

Then I run a time series with a sliding window to look at whether it is normal in-window and outside the window, and log other periods in time where it deviated significantly by the same amount. It's how I came up with the four-stage polycrisis model posted elsewhere.
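A bare-bones version of that sliding-window pass, assuming nothing fancier than a plain list of daily counts for one phrase (window size and threshold are placeholders):

from statistics import mean, pstdev

def windowed_anomalies(series, window=30, sig=3):
    # Flag indices whose value sits `sig` or more standard deviations
    # above the mean of the preceding in-window values.
    hits = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sd = mean(ref), pstdev(ref)
        if sd > 0 and (series[i] - mu) / sd >= sig:
            hits.append((i, (series[i] - mu) / sd))
    return hits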

In any case, by doing it this way, a baseline is established, so even if there's a general uptick in new phrases, or a flurry of media activity, they can't game the significance value, because it's dynamic and based on many sources.
Essentially it's a measure of how astroturfed a message is. The more that multiple 'opposing' outlets use a key phrase, even after normalization, the more obvious it is in-distribution. And I just run a filter over those, and what's left are the significant phrases/mentions/events that haven't been gamed.
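A toy version of that astroturf measure, where the outlet-to-camp mapping and the scoring rule are illustrative assumptions rather than the actual pipeline:

def astroturf_score(phrase_counts, outlet_camp):
    # phrase_counts: {outlet: occurrences of the phrase}
    # outlet_camp:   {outlet: label for the agenda/'camp' it serves}
    # The more nominally opposing camps push the same phrase at once,
    # the closer the score gets to 1.
    active = {outlet_camp[o] for o, c in phrase_counts.items() if c > 0}
    return len(active) / max(len(set(outlet_camp.values())), 1)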

[ - ] bossman131 0 points Nov 29, 2023 19:01:57 (+0/-0)

Bill 𝙂𝙖𝙩𝙚𝙨 can go to 𝙃𝙚𝙡𝙡@!

[ - ] iThinkiShitYourself 0 points Nov 29, 2023 18:51:38 (+0/-0)

Is this about the zion don paying for all that bad publicity against himself?

[ - ] prototype [op] 2 points Nov 29, 2023 19:21:36 (+2/-0)

Is this about the zion don paying for all that bad publicity against himself?

That's a relevant line of discussion, but I haven't a clue.

I just know that if you scan enough news sources, at a large enough scale, for common n-grams, there's a non-coincidental, causative link between observed phrases and eventual outcomes.

For example, it's what allowed the U.S. to predict the Arab Spring (and gave the U.S. the opportunity to pour fuel on the fire and co-opt it).

There are these weird convergences in the actual chatter, the large-scale collection of daily utterances, news, text, and discussion, that somehow correlate with upcoming short- and mid-term events, and if you have access to a large enough corpus, it can be pretty accurate.

What I do is on a much smaller, targeted scale. Part of it is understanding organizational thinking in order to analyze which sources should be included for daily scraping and scanning. The model is five parts (a sketch of one way to encode it follows):

1. leadership style
2. cohort interests (short term: their immediate circle and party)
3. contributor interests (mid-term interests; in the U.S. that would be lobbyists, unions, military contractors, etc.)
4. counter-offensive projections (known and potential playbooks)
5. FICM (foreign interests, competitors, and counterparty threat models), which compose the stage of all other competitors at a given environmental level (international, regional, market, military, inter-organization, etc.) and their relevant analysis under this model as well
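Encoded as data, with field names invented purely for illustration:

from dataclasses import dataclass

@dataclass
class OrgModel:
    # One record per organization under analysis.
    leadership_style: str         # 1. leadership style
    cohort_interests: list        # 2. short term: immediate circle and party
    contributor_interests: list   # 3. mid term: lobbyists, unions, contractors
    counter_offensives: list      # 4. known and potential playbooks
    ficm: list                    # 5. foreign interests, competitors,
                                  #    counterparty threat models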

By taking all this shit into account, you can basically say whether some conclusion is 'reasonable' (for some measure of 'reasonable') or not, and if so, how reasonable it is relative to other presumptions.

From there you derive probable indicators for one play in a playbook versus another.

With that in hand, you go to each news source, be it discussion groups, forums, 'alt news', or even commentary channels, and you ask:

1. Where does their funding come from? (This is not always obvious; for example, a 'donation'-run 'alt' news channel might be getting donations from a bunch of sockpuppets for an intelligence agency.)
2. Who have their writers/producers/owners worked for in the past?
3. What is their policy position on xyz topics?

And you riff on that. Knowing these things, it's immediately easy to distinguish who has an agenda and who's just baiting for clicks (there's a lot of noise in the signal; in many instances, intentional). Once you know who has an agenda (who is actually connected to operations, as opposed to noise designed to confuse the public), you now have a fixed point to operate from.
Because they have an agenda, some things they say must always be lies, because it goes against their own interests to report them accurately. Some things they report must always be true, because it harms competing interests. And some things they will never report on.
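Those three questions plus the fixed-point logic fit in a small record; the names and fields here are guesses for illustration:

from dataclasses import dataclass

@dataclass
class SourceProfile:
    name: str
    funding: list      # 1. who pays: donors, sponsors, suspected fronts
    history: list      # 2. past employers of writers/producers/owners
    positions: dict    # 3. topic -> stated policy position
    has_agenda: bool   # connected to operations, vs. clickbait noise

def fixed_points(sources):
    # Agenda-driven sources are the stable reference: some of what they
    # say must be lies, some must be true, and some never gets reported.
    return [s for s in sources if s.has_agenda]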

The best datasets are evenly balanced between the various competing interests and agendas intra-category (be the organizations NGOs, nations, agencies of a nation, etc). Commonalities of news sources can be gamed to spread false perceptions of course, but that falls more under the purview of militarized information operations and IO funnels anyway, and those are easy to spot from a mile away. Sometimes, of course, you can't help but participate because those are the only outlets for certain types of discussion, as is often the case on alt-media forums, but I'm going off on a tangent.

Using this, you weed out any news sources that aren't useful, any you're unsure of, and select only the ones that have obvious agendas. Basically you want to be where the propaganda is, because the signal from one site cancels out the signal from another when you normalize event frequency statistics across the respective n-gram phrases.

And what emerges from that is all the organic chatter, almost entirely devoid of any influence from the propaganda.
It also lets you do basic signals analysis to determine the magnitude of influence operations on any given site relative to others.
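One way to read "the signal from one site cancels out the signal from another": subtract the cross-outlet common component from each outlet's normalized phrase frequencies and keep the residual. A guess at the mechanism, not the actual code:

def residual_chatter(freqs):
    # freqs: {outlet: {phrase: normalized frequency}}
    phrases = {p for table in freqs.values() for p in table}
    residuals = {}
    for phrase in phrases:
        vals = [table.get(phrase, 0.0) for table in freqs.values()]
        common = sum(vals) / len(vals)   # the shared, coordinated push
        residuals[phrase] = [v - common for v in vals]
    return residuals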

For example, the AR15 forums are almost certainly under direct gag order and in continuous data exchange with the federal government, if not an outright intelligence front against the public.

This system and method has worked for me over and over again, yielding heads-ups, clues, and clear indicators before events even happen. In some respects the model I use is more advanced than what the intelligence agencies use, because they lack critical models of core data and the methodology to prepare their daily data sets. They can predict medium-term trends (say, 24 months out) with their systems, on a regional basis, but they can't predict, say, individual actions, except under extreme circumstances.

I knew my system worked when the chatter was reporting correlations for a shooting in D.C. the night of, and I came down from my office to the news reporting a shooting... in D.C.

It's been like that ever since.

[ - ] deleted 0 points Nov 29, 2023 14:23:56 (+2/-2)

deleted

[ - ] Gowithit 2 points Nov 29, 2023 14:33:16 (+2/-0)

Really? Around dinner time? Q's a dick.