(This chapter goes to some very strange and dark places, including psychosis and suicide. What follows is largely a curated compilation of blog posts and comments, i.e. an online oral history. After consideration I've decided to generally not include unsubstantiated secondhand hearsay accusations such as “I heard X did Y to Z,” but you can find plenty if you dig deeper. Which claims to trust, who to believe, where to be skeptical, where to be derisive ... is up to you. My own confidence levels admit considerable room for error. Yours probably should too.)
[See Chapter 1 for full backstory; it depicted the growth of the rationalism movement around the work of Eliezer Yudkowsky, who founded the Singularity Institute, and whose “Sequences” describing rationalism seeded the online community LessWrong.]
i. Leverage
Leverage Research was introduced to LessWrong in January 2012, roughly a year after the organization's founding. Its founder Geoff Anders wrote:
Because of our ties to the Less Wrong community and our deep interest in rationality, I thought it would be good to formally introduce ourselves. [...] Over half of our people come from the Less Wrong / Singularity Institute community [...] Our goal at Leverage is to make the world a much better place, using the most effective means we can [...] One of our projects is existential risk reduction ... persuading people to take the risks of artificial general intelligence (AGI) seriously [...] A second project is intelligence amplification [...] We have several others, including the development of rationality training.
This message was posted not by Anders himself, but by the Singularity Institute's executive director on Anders's behalf. They could hardly have made Leverage Research's close connections to the rationalist community any clearer.
(Final author's note, I promise: on the Internet no one knows you're a dog, and attribution is hard. It is possible, albeit generally unlikely, that text apparently posted by one person was actually written by someone else. Here I write “X said Y,” not “Y was ascribed to a user purportedly named X.” You may mentally speckle ‘allegedly’ everywhere if it makes you feel better.)
Nine years later, a woman named Zoe Curzi published a Medium post entitled “My Experience with Leverage Research.” In the meticulous, prolix, semi-detached tone common to most LessWrong posts, she writes about her post-Leverage PTSD; why she took so long to write about the organization she left in 2019; some benefits of membership; and then, a remarkable section whose key elements are retained below —
A Taste of How Out of Control This Got
To give you a picture of where the culture eventually ended up, here's a list of some things I experienced by the end of my time there:
2-6hr long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a "demon" which had infiltrated our psyches from one of the rival groups, its nature and effects, and get it out of our systems using debugging tools.
People in my group commenting on a rival group having done "magic" that weekend and clearly having "powered up," and saying we needed to now go debug the effects of being around that powered-up group, which they said was attacking them or otherwise affecting their ability to trust their own perception
Accusations being thrown around about people leaving "objects" (similar to demons) in other people, and people trying to sort out where the bad "object" originated and if it had been left intentionally/malevolently or by accident/subconsciously.
People doing séances seriously. I was under the impression the purpose was to call on demonic energies and use their power to affect the practitioners' social standing.
Former rationalists clearing their homes of bad energy using crystals.
Much talk & theorizing on "intention reading," which was something like mind-reading.
I personally went through many months of near constant terror at being mentally invaded.
I personally prayed for hours most nights for months to rid myself of specific "demons" I felt I'd picked up from other members of Leverage.
If this sounds insane, it's because it was. It was a crazy experience unlike any I've ever had. And there are many more weird anecdotes where that came from.
[...]
People (not everyone, but definitely a lot of us) genuinely thought we were going to take over the US government.
The main mechanism through which we'd save the world was that Geoff would come up with a fool-proof theory of every domain of reality.
Geoff estimated that there were roughly 10 "super weapons" or "super theories." He said we already had 1-2. [...] we had [mostly] solved philosophy ... we had the One True Theory of psychology.
The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks.
The full post goes into more depth and nuance, but it's worth stressing how heavily Leverage relied on “debugging,” based on Anders's “Connection Theory.” Debugging consisted of — to paraphrase and oversimplify — opening up one's psyche, history, self, and emotional core to maximum vulnerability, generally to one's hierarchical superior, and then doing all you could to follow their suggestions to “fix” your mind.
Much of the broader rationalist community had already grown critical of Leverage. In 2018, a LessWrong post had argued that, 100 person-years and $2 million later, all Leverage had to show was seven blog posts and two Effective Altruism summits. A former Leverage employee was identified as a white nationalist. In 2021, another post listed problematic aspects of Leverage, saying: “I feel it is important to make these particular facts more legibly known ... because these pertain to the characteristics of a "high-demand group" (which is a more specific term than "cult").”
Those 2021 posts mostly referred to events between 2017 and 2019. By the time they emerged, Leverage 1.0 had already disbanded, replaced by Leverage 2.0, much smaller and under new management. (Leverage 1.0 had previously spun off a for-profit startup, Paradigm, which launched a cryptocurrency, because of course it did.) In the wake of Curzi's demons-and-mental-invasion post, an account was set up to receive anonymous submissions from other Leverage members. Those published make for interesting reading, including:
“There briefly was an Occult Studies group”
“While at Leverage/Paradigm I was aware of many distressed people and multiple people who reported paranoia or even claimed omnipotence”
An “Information Management Checklist” which begins “The first purpose is to prevent people who might cause significant harm to the world from gaining information that would help them to cause such harm.”
[An aside: this surreal comment, from a self-proclaimed non-rationalist who had previously, inadvertently, become an erotic-hypnosis cult leader(!), suggests Leverage was structured such that becoming a cult, intentionally or not, was inevitable. This is an interesting notion. It is also, bizarrely, not the first time your narrator has encountered intimations of the existence of a deeply problematic Bay Area erotic-hypnosis underworld.(!!) On the previous occasion it focused on an individual who was, briefly, quite high profile. I won't get into it, as it doesn't seem connected, and also, when I submitted that previous story for potential publication, lawyers were immediately summoned. Odd coincidence? Iceberg tip? Who can say?]
Most of the community's exegesis of Curzi's post took place, at considerable length, in the comments on a LessWrong post created for that purpose. Some selections:
“To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously — from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.”
“For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced.”
“After discussing the matter with some other (non-Leverage) EAs [effective altruists], we've decided to wire $15,000 to Zoe Curzi (within 35 days). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers ... We've temporarily set aside $85,000 in case others write up similar accounts”
“But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn't engaged beyond the post) have come forward in this thread. And I... guess we should talk about that. I know from firsthand, that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.”
(From Geoff Anders): “Zoe - I don't know if you want me to respond to you or not, and there's a lot I need to consider, but I do want to say that I'm so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience. [...] Clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance.” (Anders went on to publish a lengthy letter of apology and response.)
“For someone familiar with Scientology, the similarities are quite funny. There is a unique genius who develops a new theory of human mind called [Dianetics | Connection Theory]. [...] The genius starts a group with the goal of providing almost-superpowers such as [perfect memory | becoming another Elon Musk] to his followers, with the ultimate goal of saving the planet. The followers believe this is the only organization capable of achieving such goal. They must regularly submit to having their thoughts checked at [auditing | debugging], where their sincerity is verified using [e-meter | Belief Reporting]. When the leader runs out of useful or semi-useful ideas to teach, there is always the unending task of exorcising the [body thetans | demons].”
“A 2012 CFAR workshop included "Guest speaker Geoff Anders presents techniques his organization has used to overcome procrastination and maintain 75 hours/week of productive work time per person." He was clearly connected to the LW-sphere if not central to it.”
“Personally I think this story is an important warning about how people with a LW-adjacent mindset can death spiral off the deep end. This is something that happened around this community multiple times, not just in Leverage (I know of at least one other prominent example and suspect there are more), so we should definitely watch out for this and/or think how to prevent this kind of thing.”
The evidence suggests that this last comment is 100% correct; that while rationalism itself is absolutely not a cult, religious terror and cultlike behavior are failure modes into which rationalist thought can slip with shocking (and counterintuitive) ease. It's unclear whether there is any causation here, or only correlation ... but the quantity of correlation is substantial. Consider, for instance, the events to which the last comment refers — the far weirder events, culminating in a scrambled police helicopter and a cluster of criminal arrests, of November 2019 —
ii. The Zizians
The San Francisco Chronicle, November 18, 2019: “Sonoma County authorities are investigating a bizarre protest against a Berkeley nonprofit focused on rational thought after four people dressed in black robes and Guy Fawkes masks were arrested for allegedly barricading off a wooded retreat where the nonprofit group was holding an event.”
There were five, actually, at a camp a few miles from the infamous Bohemian Grove, but only four were arrested, for “felony child endangerment, felony false imprisonment, felony conspiracy, misdemeanor resisting arrest, wearing a mask while committing a crime, and trespassing.” (The fifth was released after claiming to be a journalist.) They were apparently there to protest “artificial intelligence and the Center for Applied Rationality ... as well as the Machine Intelligence Research Institute.”
Eliezer Yudkowsky's “Singularity Institute for Artificial Intelligence” renamed itself the “Machine Intelligence Research Institute,” aka MIRI, in 2013. CFAR is trickier to explain. According to a 2016 New York Times piece, “CFAR began as a spinoff of MIRI ... Yudkowsky found people struggled to think clearly about AI risk and were often dismissive. [CFAR's president Anna] Salamon, who had been working at MIRI since 2008, volunteered to figure out how to overcome that problem.” The result was CFAR. Salamon described its mission as: “to help people develop the abilities that let them meaningfully assist with the world's most important problems, by improving their ability to arrive at accurate beliefs, act effectively in the real world, and sustainably care about that world.” Basically, to oversimplify, CFAR was a rationalism training institute.
As such it made sense for those protesting MIRI to protest CFAR as well. But this was no ordinary protest. Quite apart from the black robes and Guy Fawkes masks, the protestors' pamphlets were profoundly weird. Along with semi-coherent accusations aimed at MIRI and CFAR were lines like:
“recipe for escaping containment by society ... interhemispheric game theory (work in progress)”
“Claiming we're safe because good would destroy hell puts the cart before the horse. Dark gods whisper lies of logical order”
“Understand your source code and that of others: this is a way to have verifiably un-DRMed knowledge to build on. Hemispheres. Undead types.”
(Note to future singularitarian protestors: probably don't put "(work in progress)" anywhere on your protest literature, as admirably candid as it may be.)
...So what the hell happened in Sonoma County that night?
Conveniently there is an entire web site devoted to answering that question: zizians.info. Its pseudonymous author identifies a ringleader — a cult leader, really — called Ziz: a woman who moved to the Bay Area to work on AI risk, grew embittered, and “accused CFAR and MIRI of sexual and financial crimes.” It seems that she and her coterie, called the Zizians, demonstrated in Guy Fawkes masks in Sonoma County that night not so much as a protest as an attempt to recruit from the CFAR attendees. (This makes sense, as the attendees would have been a sizable fraction of those alive capable of making any sense of the Zizians' literature.)
Ziz was apparently also, remarkably, a hemispherist:
Ziz writes that each person has two cores made up of their left and right hemispheres. Each of these is considered a full person, and when Ziz says "person" she really means an animal's core.
This is somewhat less nuts than it may sound. Experiments do show that when the corpus callosum connecting a brain's hemispheres is severed, the hemispheres go on to make decisions independently. But, to put it very mildly, the scientific consensus does not support the Zizian notion that we are all really two different people uncomfortably yoked together. Ziz reportedly tried to induce hemispheric schisms, and to get others to do so as well, via “unihemispheric sleep,” which is
achieved by stimulating one half of the body and resting the other. Like hypnosis or fasting, this is a vulnerable psychological state for a person. It also has disorienting effects so they are not quite themselves. The Zizians exploit this state to convince the unwary that they are actually two people.
Zizians.info goes on:
When Ziz talks about punishing people who will build hell, she really means people who are singularitarians without being vegan [...] According to someone familiar with the matter, Ziz played a large role in the death [by suicide] of Maia [a frequent LessWrong contributor] [...] Zizians encourage the adoption of a new identity, giving a name to each hemisphere (e.g., Ziz's left hemisphere "Xalis"; right hemisphere "Yanrae"). They also encourage new members to make significant and often uncharacteristic lifestyle changes.
The web sites of Ziz and her apparent compatriots do not exactly go out of their way to dispute the above depictions, which are further supported by Jessica Taylor — a Stanford CS graduate about whom much more momentarily — who wrote:
Maia had been experimenting with unihemispheric sleep. Maia (perhaps in discussion with others) concluded one brain half was "good" in the Zizian-utilitarian sense of "trying to benefit all sentient life, not prioritizing local life," and the other half was [...] "selfish, but trying to cooperate with entities that use a similar algorithm to make decisions." These different halves of Maia's brain apparently got into a conflict, due to their different values. One half (by Maia's report) precommitted to killing Maia's body under some conditions. This condition was triggered, Maia announced it, and Maia killed themselves.
...You'll note this all seems
deeply sad
completely batshit
impressively inconsistent with the idea of rationalism as "forming true beliefs and making decisions that help you win." (Worth noting: "precommitted" is a concept from decision theory, a branch of philosophy with which rationalism is deeply concerned.)
Rather, it seems further evidence that people attracted to rationalism are disproportionately prone to “death spirals off the deep end” ... and it at least leaves open the possibility that rationalism can actively worsen them. Admittedly, so far we've only talked about fellow travelers and splinter groups —
— but Jessica Taylor was at MIRI itself.
iii. MIRI / CFAR / Vassar / Slate Star
A few days after Curzi's post, inspired by it, Taylor posted “My experience at and around MIRI and CFAR” to LessWrong. It is no less remarkable. Again, edited highlights follow. (They will still seem long. The original post is very long, the comments are much longer, and ... look, all these people just constantly emit vast profusions of words, many of them needless. I'm doing my best to condense them into a tight narrative, honest.)
Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).
I was faced with a decision between Google and MIRI. I knew that at MIRI [...] I'd get an opportunity to work with smart, ambitious people, who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences. [...] When I began at MIRI (in 2015), there were ambient concerns that it was a "cult"; this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them.
[My] psychotic break was in October 2017 [...] I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house. This was followed by severe depression lasting months. [...] During this time [apparently meaning the hospitalization] [...] I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.
There were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR [...] in addition, a case of someone becoming very paranoid, attacking a mental health worker [...] concerned about a demon inside him, implanted by another person, trying to escape [...] two cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement...
A prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years [...] the AI timelines shortening triggered an acceleration of social dynamics. MIRI became very secretive about research [...] I was endangering the world by talking openly about AI in the abstract [...] I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner...
There were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR.
I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and I had special skills, so it was uniquely my job to solve the problem. [And, therefore, save humanity from extermination and the world from ending.] [...] It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining [i.e. fate-of-humanity] problems[...]
There was certainly a power dynamic of "who can debug who."
Note, in particular, that “debugging” reportedly occurred at MIRI/CFAR as well as Leverage. In comments, another MIRI alumnus generally supported these claims. Taylor subsequently edited her post to apparently endorse an anonymous comment which claimed further:
I'm a present or past CFAR employee commenting anonymously to avoid retribution. [...] At least four people who ... worked in some capacity with the CFAR/MIRI team had psychotic episodes. [...] Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file.
Debugging sessions with [...] leadership were nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. [...] The organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders' Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage's debugging and the similarity in naming isn't just a coincidence of terms.
The overall effect [...] was that it was hard to maintain the privacy and integrity of your mind if you were a rank-and-file employee at CFAR. [...] The longer you stayed with the organization, the more it felt like your family and friends on the outside could not understand the problems facing the world, because they lacked access to the reasoning tools and intellectual leaders you had access to. This led to a deep sense of alienation.
There was rampant use of narrative warfare (called "narrativemancy" within the organization) by leadership to cast aspersions and blame on employees and each other. There was frequent non-ironic use of magical and narrative schemas which involved comparing situations to fairy-tales or myths and then drawing conclusions about those situations with high confidence. [...]
At social gatherings you felt uncertain at times if you were enjoying yourself at a party, advocating for yourself in an interview, or defending yourself on trial for a crime.
But wait, there's more! Taylor wrote a follow-up post, “Occupational Infohazards,” two months later. (“Infohazards” will be explored in a subsequent chapter.) Some choice bits:
My official job responsibilities as a researcher at MIRI implied it was important to think seriously about hypothetical scenarios, including the possibility that someone might cause a future artificial intelligence to torture astronomical numbers of people ... My psychotic break in which I imagined myself creating hell was a natural extension of this line of thought. [...] It seemed at the time that MIRI leaders were already encouraging me to adopt a kind of conflict theory in which many AI organizations were trying to destroy the world on <20-year timescales.
Negative or negative-leaning utilitarians, a substantial subgroup of Effective Altruists (especially in Europe), consider "s-risks", risks of extreme suffering in the universe enabled by advanced technology, to be an especially important class of risk [...] These considerations were widely regarded within MIRI as an important part of AI strategy.
The comments on both of her posts are, again, absurdly long. Yudkowsky commented, clearly appalled. But the comment I wish to highlight is by Scott Alexander — yes, that Scott Alexander, about whom much more anon — who writes:
I want to add some context I think is important to this.
Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.
Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing. [...] Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. [...] Jessica talks about a cluster of psychoses from 2017 - 2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were "in the social circle" in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.
Then, remarkably:
EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We're still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were - it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member
Update: I interviewed many of the people involved and feel like I understand the situation better. My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.
Let's briefly pause to appreciate the appearance of someone who conveys key points in a few crisp paragraphs.
And then let's note that, in sum, even according to Alexander (whom Taylor disputes in her second post), the Bay Area rationalist community of circa 2017-19 — estimated at “conservatively 500 people,” not exactly a massive horde! — hosted multiple clusters of people who experienced sharp breaks from consensus reality; at least four suicides; and at least three firsthand accounts, all from clearly brilliant young women, implying or outright claiming cult-like activity and exploitation.
As Auric Goldfinger might say: “Mr. Bond, they have a saying in Eliezer's hometown. Once is happenstance. Twice is coincidence. The third time, it's enemy action.”
iv. How and why did all this happen?!
…On the desert road into Burning Man, the counterculture art festival (etc.) frequently attended by much of the Bay Area (including your narrator and many rationalists), one passes signs saying: “Burning Man Is A Self-Service Cult. Wash Your Own Brain.” This is funny, and not entirely wrong, with respect to Burning Man. It is much less funny, because it seems much less wrong, when it comes to vulnerable people attracted to rationalism.
“Death spirals” is actually a rationalist term of art. There is an entire Sequence, entitled “Death Spirals and the Cult Attractor,” which seeks to explain how exactly this sort of thing can happen. Ironically, discussion of cults (and their avoidance) was so common on LessWrong that “there was concern that discussion of cults on the site would cause LessWrong to rank highly in Google search results for "cult"; it was therefore recommended that people instead use the word "phyg" to discuss cults.” “People accuse us of worshipping Eliezer,” a LessWronger writes. Another mocks the notion that he's been brainwashed by effective altruists / rationalists ... and in so doing makes it clear the accusation is common.
LessWrong often collectively seemed to hold a strangely naïve view of the rest of the world. Jessica Taylor wrote things like “As far as I can tell, normal corporate management is much worse than Leverage”(!) and “it [was] consistent with standard business practices for [my manager] to prevent me from talking with people who might distract me from my work; this goes to show the continuity between 'cults' and 'normal corporations'”(!!). Those of us who have worked at normal corporations — as suboptimal as they can be — may take some time here to retract our jaws from the floor. And hers are far from the only such examples.
Rationalism / Bayesianism is an often-very-useful mode of thinking, but one which seems to come with worrying failure modes. It's true that, to its great credit, rationalism is theoretically all about changing your beliefs in the face of new evidence, proportionate to the strength of that evidence. On paper this should make it resistant to cultlike beliefs. But the Sequences don't have much to say about how, when, or why to gather new evidence to challenge “your priors,” i.e. your current beliefs. They do stress, however, that — given sufficient evidence, a prerequisite often glossed over — rationalism is not merely one tool in the cognitive toolbox (which is my own view) but the optimal way of thinking. It isn't hard to see how this can be misinterpreted into a fervent belief that whatever conclusions one reaches with one's self-proclaimed rationalist thinking must, therefore, be definitionally correct.
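(To make that mechanism concrete: what rationalists mean by updating on evidence is essentially Bayes' rule, in which a prior belief is revised in proportion to how much more likely the evidence would be if the hypothesis were true than if it were false. The sketch below is my own toy illustration, not anything drawn from the Sequences; the function name and the numbers are invented for the example. Note what it cannot do: the math only disciplines a belief once evidence actually arrives, and says nothing about going out to look for it.)

```python
# A toy illustration of Bayesian updating (author's sketch, not from the Sequences).
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior P(H | E) from a prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return (p_e_given_h * prior) / p_e

# Start fairly skeptical of some hypothesis H...
belief = 0.10
# ...then observe three pieces of evidence, each four times likelier if H is true.
for _ in range(3):
    belief = bayes_update(belief, p_e_given_h=0.8, p_e_given_not_h=0.2)
    print(round(belief, 3))  # prints 0.308, then 0.64, then 0.877
```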
Critics have long dismissed rationalism as an outright cult. The nuanced reality seems much sadder: that the rationalists approached community-building in naïve good faith, but there was a dangerous gap — an abyss — between their theory and their results. For years, they failed to close that abyss, or somehow didn’t notice it, or pretended it didn't exist. And in that time, an appalling number of vulnerable people fell into it ... some of whom, engulfed by its demons, never emerged.
There's much more to say about “mainstream” rationalism, but these surreal tragedies are both a mesmerizing story and one it would be irresponsible to ignore. This was a grim chapter, I know. Sorry. Let me reassure you: the disturbing stuff is now mostly behind us. In the next chapter we climb into a new rabbit hole, one far more fun and scarcely less weird: that of the birth of cryptocurrencies.