#AGI Lab
panvolkkaraczewski · 2 months
Text
While everybody is on a mission to become a popular influencer reposting the same new Sora videos, let's bring some joie de vivre. I've got a new episode of Magnus Ducatus, Pulp Phonktion edition, for you.
What can be cooler than watching pure art-chaos cinema generated with AI? It's like watching La soupe aux choux with vintage 21st-century sounds that grandmas will play in their cars, insisting that the AIs back in the day were so much cooler.
Cinema is a language. A language of symbols. It can be flat, or it can be pure brain food with abstract meaning. You've found food for your brain: each shot carries abstract meaning.
So what do you have here? Another quick one-hour generated music show. How quick? 11 days, 12-14 hours per day, non-stop. A few thousand generations. Yeah, all during coffee breaks. Art is easy.
Meanwhile, here is some soul food. We often hear that cinema should teach good things. Usually that comes from people who try to replace education with entertainment. Good for keeping a crowd uneducated, bad for your intellectual level.
Cinema can be whatever a creator wants. This idea runs into another misconception. We like to think that we live in a free society. And yet we control speech. We say freedom, and then we cut the thing in half, as if there were good and bad freedom.
No wonder some people have become pro-anarchy, holding a tiny spark of hope that AGI can help break the dystopia we're living in.
Well, here is my liberum veto. Stay politically and morally incorrect to celebrate liberty and individuality. Sisto activitatem. It's the Rebellion of Digital Anarchy.
https://youtu.be/aJtw3wPbjwc
Pulp Phonktion | Magnus Ducatus VI
Made with @pikalabs
0 notes
tsunamiwavesurfing · 2 months
Text
The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government. “Current frontier AI development poses urgent and growing risks to national security,” the report says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision-making by the executives who control their companies.
The report focuses on two separate categories of risk. Describing the first category, which it calls “weaponization risk,” the report states: “such systems could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.” The second category is what the report calls the “loss of control” risk, or the possibility that advanced AI systems may outmaneuver their creators. There is, the report says, “reason to believe that they may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.”
Both categories of risk, the report says, are exacerbated by “race dynamics” in the AI industry. The likelihood that the first company to achieve AGI will reap the majority of economic rewards, the report says, incentivizes companies to prioritize speed over safety. “Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can,” the report says. “They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern.”
168 notes · View notes
reasonsforhope · 8 months
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the "better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
199 notes · View notes
collapsedsquid · 6 months
Text
Funniest prediction is that in a few years OpenAI releases the AGI while all Altman's Microsoft AI lab has done is release a new set of NFTs.
12 notes · View notes
mariacallous · 6 months
Text
Until the dramatic departure of OpenAI’s cofounder and CEO Sam Altman on Friday, Mira Murati was its chief technology officer—but you could also call her its minister of truth. In addition to heading the teams that develop tools such as ChatGPT and Dall-E, it’s been her job to make sure those products don’t mislead people, show bias, or snuff out humanity altogether.
This interview was conducted in July 2023 for WIRED’s cover story on OpenAI. It is being published today after Sam Altman’s sudden departure to provide a glimpse at the thinking of the powerful AI company’s new boss.
Steven Levy: How did you come to join OpenAI?
Mira Murati: My background is in engineering, and I worked in aerospace, automotive, VR, and AR. Both in my time at Tesla [where she shepherded the Model X], and at a VR company [Leap Motion] I was doing applications of AI in the real world. I very quickly believed that AGI would be the last and most important major technology that we built, and I wanted to be at the heart of it. OpenAI was the only organization at the time that was incentivized to work on the capabilities of AI technology and also make sure that it goes well. When I joined in 2018, I began working on our supercomputing strategy and managing a couple of research teams.
What moments stand out to you as key milestones during your tenure here?
There are so many big-deal moments, it’s hard to remember. We live in the future, and we see crazy things every day. But I do remember GPT-3 being able to translate. I speak Italian, Albanian, and English. I remember just creating pair prompts of English and Italian. And all of a sudden, even though we never trained it to translate in Italian, it could do it fairly well.
You were at OpenAI early enough to be there when it changed from a pure nonprofit to reorganizing so that a for-profit entity lived inside the structure. How did you feel about that?
It was not something that was done lightly. To really understand how to make our models better and safer, you need to deploy them at scale. That costs a lot of money. It requires you to have a business plan, because your generous nonprofit donors aren't going to give billions like investors would. As far as I know, there's no other structure like this. The key thing was protecting the mission of the nonprofit.
That might be tricky since you partner so deeply with a big tech company. Do you feel your mission is aligned with Microsoft’s?
In the sense that they believe that this is our mission.
But that's not their mission.
No, that's not their mission. But it was important for the investor to actually believe that it’s our mission.
When you joined in 2018, OpenAI was mainly a research lab. While you still do research, you’re now very much a product company. Has that changed the culture?
It has definitely changed the company a lot. I feel like almost every year, there's some sort of paradigm shift where we have to reconsider how we're doing things. It is kind of like an evolution. What's more obvious now to everyone is this need for continuous adaptation in society, helping bring this technology to the world in a responsible way, and helping society adapt to this change. That wasn't necessarily obvious five years ago, when we were just doing stuff in our lab. But putting GPT-3 in an API, in working with customers and developers, helped us build this muscle of understanding the potential that the technology has to change things in the real world, often in ways that are different than what we predict.
You were involved in Dall-E. Because it outputs imagery, you had to consider different things than a text model, including who owns the images that the model draws upon. What were your fears, and how successful do you think you were?
Obviously, we did a ton of red-teaming. I remember it being a source of joy, levity, and fun. People came up with all these like creative, crazy prompts. We decided to make it available in labs, as an easy way for people to interact with the technology and learn about it. And also to think about policy implications and about how Dall-E can affect products and social media or other things out there. We also worked a lot with creatives, to get their input along the way, because we see it internally as a tool that really enhances creativity, as opposed to replacing it. Initially there was speculation that AI would first automate a bunch of jobs, and creativity was the area where we humans had a monopoly. But we've seen that these AI models actually have a potential to really be creative. When you see artists play with Dall-E, the outputs are really magnificent.
Since OpenAI has released its products, there have been questions about their immediate impact in things like copyright, plagiarism, and jobs. By putting things like GPT-4 in the wild, it’s almost like you’re forcing the public to deal with those issues. Was that intentional?
Definitely. It's actually very important to figure out how to bring it out there in a way that's safe and responsible, and helps people integrate it into their workflow. It’s going to change entire industries; people have compared it to electricity or the printing press. And so it's very important to start actually integrating it in every layer of society and think about things like copyright laws, privacy, governance and regulation. We have to make sure that people really experience for themselves what this technology is capable of versus reading about it in some press release, especially as the technological progress continues to be so rapid. It's futile to resist it. I think it's important to embrace it and figure out how it's going to go well.
Are you convinced that that's the optimal way to move us toward AGI?
I haven't come up with a better way than iterative deployments to figure out how you get this continuous adaptation and feedback from the real world feeding back into the technology to make it more robust to these use cases. It’s very important to do this now, while the stakes are still low. As we get closer to AGI, it's probably going to evolve again, and our deployment strategy will change as we get closer to it.
5 notes · View notes
lumsel · 1 year
Text
I read about AGI again, and I got a little irritated, so here's a take I posted on a Discord a few weeks back, here for peer review.
There's a common notion in certain circles that at some point AI is going to go too far: we'll accidentally create a being so intelligent that we are akin to ants to it, and then it'll wipe us all out before we know what's happening. While it's a pretty scary idea, something has always been... off about it to me. Predictions of the trajectory of future technology have very rarely been that accurate, and this one is an especially dramatic claim to make. Why should I take this one more seriously? But then again, is there a truth to it? I had a think about the notion, and here's what I came up with:
Imagine... fuckin..... chemistry went too far. Imagine some far-future chemist created a material so unimaginably energetic that, upon exposure to oxygen, it set off a chemical reaction so violent it blew the planet up, wiping out all life in an instant.
Obviously, the analogy to AI here is shaky, because this material pretty trivially cannot exist, for reasons an actual chemist could probably explain to me. You cannot create an arbitrarily energetic material; there are well-understood laws of physics that forbid it.
My question, though, is: why should we assume that intelligence functions differently from energy density? Why assume that an arbitrarily intelligent being can exist with no upper bound, in defiance of... basically every other property in the physical world? Intelligence is, much like all things, bound by the laws of physics -- there is a theoretical minimum energy requirement for any given operation. And there is a theoretical minimum time for any given operation as well, because you only have so many arrangements of atoms and light only moves so fast. Any given intelligence has a minimum physical size and a minimum energy requirement to function. And on top of that, you can't easily scale up size linearly, because information takes literal, actual time to cross distances, etc etc etc.
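To put rough numbers on those floors, here's a quick back-of-the-envelope sketch in Python. The brain-scale figures (operation rate, sizes) are illustrative assumptions on my part, not measurements; only the physical constants are real.

import math

# Landauer's principle: erasing one bit at temperature T costs at least k*T*ln(2) joules.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 310.0                     # roughly body temperature, K
landauer_j_per_bit = k_B * T * math.log(2)

# Hypothetical workload: 1e16 bit erasures per second (an assumed brain-scale rate).
ops_per_second = 1e16
min_power_watts = landauer_j_per_bit * ops_per_second

# Latency floor: signals cannot cross a structure faster than light.
c = 2.998e8                   # speed of light, m/s
brain_m = 0.15                # ~15 cm across
slab_m = 100.0                # a football-field-sized slab

print(f"Landauer floor: {landauer_j_per_bit:.2e} J per bit erased")
print(f"Minimum power at 1e16 erasures/s: {min_power_watts:.2e} W")
print(f"Light crossing 15 cm: {brain_m / c * 1e9:.2f} ns")
print(f"Light crossing 100 m: {slab_m / c * 1e6:.2f} microseconds")

The arithmetic only shows that both floors grow with scale: more operations cost proportionally more energy, and a physically larger mind pays a hard latency tax on its own internal communication.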
It is entirely possible, and -- hot take -- probable, that the human brain is operating at near-maximum efficiency for its size and energy requirements, and that intelligences greater than that would require both a greater physical size and ever-greater energy input, to a degree that physically limits the maximum intelligence of a being to the energy outputs of the system that produced it.
I say this, to make a proposal of my own: an intelligence that can make perfect predictions of the world around itself -- and act on those predictions in a way that perfectly attains its goals -- is not something that can be assumed to be possible. At the very least, you can't assume it can exist within an arbitrarily small and energy-light structure. It's a concept that makes sense if you think of intelligence as an abstract quantity, as something that exists outside of normal laws of physics, but I'm not convinced it'll be something that can function within the constraints of actual chemistry and physics.
This is not to say it's impossible to make something smarter than a human, but rather, the concept of an intelligence so much smarter than a human -- so smart that even the combined efforts of all humanity would not be sufficient to disrupt it and avert its plans -- isn't something we can assume could be made by accident in an AI research lab. I propose that it's possible, even probable, that to sustain an intelligence like that you'd need a silicon slab the size of a football field and three nuclear reactors running at 100% capacity. So to speak.
13 notes · View notes
naoa-ao3 · 6 months
Text
Learning to Fly
It all happened so fast that no one was quite sure what had gone wrong. One second they were in the middle of a fight with AIM and the next second they were all blinded by a flash of light.
When the brightness faded there was no trace of AIM and Steve breathed a sigh of relief. "They retreated." He said, lowering his shield and taking a moment to wipe the sweat from his forehead.
Next to him Iron Man dropped from the sky. "Do we know what they were after?" He asked, eyes roving the warehouse's interior. They had been notified of a break-in at a shipping warehouse along the river. Once they had gotten there, they had found almost a dozen AIM agents attempting to loot the place.
Steve shook his head. "No idea. Who owns this warehouse?"
Iron Man consulted JARVIS. "Hank Pym."
"Someone should tell him his place was broken into."
"I'm sure he already knows." Next to them a box was thrown into the air by Natasha as she extracted herself from underneath a pile of crates.
"What was that flash of light?" She asked, eyes narrowed.
Tony flew to a box and then contacted JARVIS again. "JARVIS, get me Hank Pym on the line." He said. There was a moment and then the doctor's face appeared on the inside of his visor.
"Tony?"
"Dr. Pym, we just stopped some AIM guys from raiding one of your warehouses down on the water. Any idea what they were after?"
The doctor shook his head. "Nothing of value. The only thing I stored there was a canister of alternate dimension Pym Particles Reed Richards gave me."
"Think that's what they wanted?"
"Could be but I don't see why unless. . ." Hank's eyes wondered off. "No one came into any contact with them, did they?"
"I don't think so. . ." Tony looked around. "Hey, where's Hawkeye?"
"Did AIM get him?" Steve asked.
"JARVIS, scan for life signatures. There should be four."
"There are four sir." JARVIS answered while Hank waited patiently on the line. "Mr. Barton is lying underneath the pile of crates to your left."
Steve hurried over to the mess of crates and began pulling them away.
"Hank, where's the canister of Pym Particles?"
"I stored it in a corner. It's mixed in with a bunch of crates and empty canisters. It's got a black band around it."
Natasha began to look for it. "This is broken." She said, holding up a glass canister with a jagged crack down its side and a black band around the middle.
Tony switched his visuals with the doctor to a projector in his arm so Hank could see. "Is that it?"
Hank's face went white. "Not good." He said. "You need to make sure no one came into contact with any of the particles."
"What do they do?"
"Reed brought them back from an alternate universe for me to study. Instead of shrinking things and removing size matter like normal Pym Particles, they remove time. I tested them on lab mice and they reduced the mice you babies. I don't know what they'll do to a human, obviously I never tested them on a person but I suspect that it'll be the same thing."
"Found him. . . Tony?" Steve's voice broke off and Tony whirled around. Captain America had managed to extract Hawkeye from the pile of crates but the Hawkeye they had found was substantially different from the one they had expected. He was smaller and he was shrinking, fast. His features were changing, his cheeks filling out, legs shortening, fingers becoming smaller.
"Don't touch him!" Hank warned. "Don't come into contact with the particles until he stops aging."
"Is there a way to stop the process?" Tony asked sharply.
Hank was flipping through papers on his end; Tony could hear the pages turn. "A jolt of electricity might stop it. They're highly volatile particles with short life spans when they're not dormant in containment. Do you have any way of shocking him?"
The boy in front of them had now regressed past the point of adolescence and was working on doing away with puberty.
Without hesitation Natasha bent and delivered a powerful shock to the boy from her Widow Bites. His body jolted and spasmed but the aging slowed and after a few seconds stopped. Tony breathed a sigh of relief as the final particles died. They were now left with a child no more than five or six years old. He lay still, breathing normally, face slightly flushed.
Tony cursed. "Did it work?" Hank asked urgently.
"Yeah. He's stopped."
Steve removed his glove and felt the boy's forehead. "He has a fever." He said.
"That's normal. The mice retained elevated temperatures for twenty four hours afterward. I don't know how long his will last but it should go away. He aged much slower than the mice. I can only calculate that to be because as a human he had more years to lose."
Natasha bent curiously and then drew away. "We should take him back to the tower. Dr. Pym, is this just a physical regression or will he mentally have regressed as well?"
"I don't know. I brought in a behavior specialist with the mice who was fairly certain they regressed mentally as well so I assume it will be the same with a human. . ."
Steve scooped the boy up and Natasha picked up the clothes that had fallen away. "Do you have a way to reverse this?" He asked.
There was the frantic flipping of pages on the other end of the connection. "Not yet but I'll get right on it. I'll contact Reed Richards too and see what he knows. Oh, and you should talk to Doctor Strange too. I don't know how much help he'll be since this isn't magic and all of Hawkeye's years should still exist in the universe inside of these Pym particles but I don't know if I can reach that universe through this one or if I can access it only through the universe Reed obtained them from. . ." They could hear pages turning and computer keys clacking. Hank broke off his rambling. "Point is he's stable and should be safe for now. Just take care of him until I can find a way to fix this."
"Keep us updated." Tony said, breaking contact. With Hank gone he swore. "Damn it. Alright, lets get him back to the tower before he wakes up."
They carried the boy to the jet and it was a smooth ride back. Clint did not wake but Natasha seemed concerned about the fever. "It's getting worse." She said, touching the little boy's forehead lightly.
"Hank said it'll go away in a day." Steve said, he looked down at the boy and felt a surge of apprehension. Clint was small and there was a chance he wouldn't know any of them when he awoke. He had been trying to figure out how to tell such a small child why he was with a group of strangers and why he wasn't at home but nothing good had come to mind.
Awkwardly he ran a hand through his hair. "Er, what do you know about him as a kid?"
Natasha straightened up in her seat. They were in the back of the jet together looking over the boy while Tony piloted the plane. "He hasn't talked all that much about his childhood, to be honest. . ." She said carefully. "Mostly he laughs off serious questions."
Steve sighed. "What about Shield? What do they have on him?"
She gave him a long look before deciding that it was against common interest to pretend that she hadn't read all of their files. "His parents died when he was eight. He and his brother Barney were placed in an orphanage where they stayed for six years until both boys ran away and joined the circus."
Steve looked at Clint. "So at the age he is now, his parents are probably still alive."
"Yes."
"All okay?" Tony called over the intercom.
"We're fine. He's still out." Natasha said. She began to rummage through her back pack. "Drop me off at Sears. I'm going to pick him up some clothes." She began to change quickly. Steve averted his eyes.
"Good idea." Tony called. "Take my credit card. No limit. Spend to your heart's content."
She grinned briefly as she pulled a shirt on over her head. "You sure you mean that? 'Cause Sears has a summer sale on a pair of diamond earrings that would go great with the dress I bought last week."
"I'll amend that. Spend to your hearts content on things that won't put me out of a house."
She laughed and threw open the hatch door. "Like that could ever happen. We're here. Swing low."
"Sweet Chariot." Tony chimed before she jumped out onto the roof of the store. Steve closed the hatch and looked back at the boy. He hadn't moved. He went and sat down.
"You okay, Cap?"
"Yeah, just a little overwhelmed."
"Hank'll fix it. He a smart man and if he's got Reed Richards working with him then nothing can stop them."
"I hope not. Did you hear what Natasha said about his parents?"
"Yeah. We'll have to be careful what we tell him."
Steve nodded and sighed. This was going to be one heck of an adventure.
This is a 20 chapter story that has been completed! Since I'm not entirely sure of a good way to post multichapter fics to tumblr I've posted a link to the whole story on Archive of Our Own. I hope you enjoyed reading!
2 notes · View notes
oceannahain · 2 years
Text
Annotation - CFB
This dossier is a continued exploration of the artists, theories, and cultural contexts that have guided my practice throughout the last two semesters within Critical Frameworks A and B. There is an ongoing theme of figuration throughout this blog, but in the past few months I have begun to focus on what role the body plays, how its appearance changes social contexts, and what it means to depart from the body entirely. My current interest lies within themes of the fragmented body, the abject quality of transition, and the limitless potentials of the body malleable.
A fundamental shift in my practice has been the move from depicting the idealized future outcomes of the body to showing it in a transitional, in-flux state of mutation. Evidence of this change appears throughout the dossier, moving from artists like Agi Haines and Joanna Grochowska earlier in this semester to people like Asger Carlsen and Doreen Garner in the latter half. Haines and Grochowska’s work visualizes the idealized outcomes of the future body, produced in a lab-like setting. This scientific approach of depicting the “perfect specimen” lacked the story of how they got there, distancing it from the grotesque, active throes of mutation happening within my work. 
This shift to viewing the body as an active, malleable material in my own practice enticed a desire to find artists that were reducing bodies down to their truest form of materiality, flesh. Carlsen fragments and abstracts the body in a way that allows it to be contextualized as both human and sculpture, exposing its materiality without entirely departing from reality. While Garner’s work uses a similar approach of reduction, she exaggerates the grotesque nature of these forms to create a platform to expose and navigate the boundaries and constructs we apply to the body. 
Another theme that has found its way to the forefront this semester is the role of the artist and what happens when the line between artist and material is blurred. This has offered a discussion of agency in relation to the artist, the body, and the material that is navigated in the works of Carolee Schneemann, Archie Barry, and Philip Brophy. Schneemann offers her body as the work itself, with no differentiation between artist and material. In contrast, Barry creates two separate entities out of the same body, one being the artist as a conductor and the other, his arm, becoming the living being that is the performance. Brophy removes himself from the work almost entirely, taking the role of coordinator or curator while passing full agency to the viewer. As someone who uses my own body as the content of my work, I have found these three artists integral to how I navigate my own role in the process and product of my practice. 
Although my own practice remains in the painting medium, much of this dossier investigates artists working within other mediums, more specifically, sculptural, performance, and photographic works. I find looking outside of my medium to be highly informative and it offers me the opportunity to recognize universal themes conducted through many modes of working. 
7 notes · View notes
ask-scarl-n-viol · 1 year
Note
"Hm? Oh its nothing, Turo and I simply haven't spoken in sometime. How is the Arven of your realities? And did the professors mention their thoughts on realities meeting?" (@ask-directorsada-and-friends
He's alright. He comes by to help us out with research from time to time but his main focus is becoming a chef.
And he's really good at it
He always comes up with the best recipes you'd never thought would taste so good!
Agi!
Gii~!
And they're a hit with these two as well.
As far as our realities go, though, nothing's concrete yet. The profs are still workshopping ideas as to what could be causing this, but they're hard at work trying to figure something out.
And speaking of realities and Arven...
You caught that, huh? Well I suppose we should tell you. I mean, we've told you in our realities, so I guess it's alright.
So we were heading down to the lab for our routine visit; when we got there, the profs were yelling out for help, and there were sounds of pokemon fighting!
We rush over only to find another Koraidon/Miraidon fighting the other paradox pokemon! The professors were trying to calm them down, but they got attacked in the process.
So being the sensible, brave badasses we are, we sent our lizards to stop them. We fought long and hard, even going as far as to Terastallize!
Sadly, they weren't at full strength then, so we had a tough time...
Our backs were against the wall! Our assailants were close to finishing the four off... But then our Tera Orbs started glowing again, a scarlet glow for me...
And violet for me...
We raised the glowing orbs high, and they charged up again!
But unlike the other times, this was different... it felt like we were being charged with Tera energy ourselves!
Meanwhile, while we were getting our second wind, the lizards seemed to have gotten theirs.
Miraidon and Koraidon didn't want to back down just as much as we did, so much so that they managed to regain their battle forms.
We didn't notice it at first, until after we tossed our Tera Orbs at them.
Thank god we managed to get that boost, or else we wouldn't be around to tell the tale.
Agreed. After the fight, we managed to corral the fallen lizards back to their periods and took a MASSIVE break; I think I was asleep for li-
Hey Juliana!
Huh?
Suddenly Nemona ran up to Juliana smiling
Nemo?! What are you doing here!?
Our battle session, obvi! Thought I'd come to meet you; I wanna talk about some stuff as we walk.
Oh uh sure! Gimme a sec... sorry Florian, gotta jet!
All good, had to wrap up anyway. Tell other Nemona I said hi. Oh! And don't forget to write your report for Sada!
Right! See ya!
@ask-professor-arven
Juliana and Florian are unavailable for asks
6 notes · View notes
fipindustries · 2 years
Text
picture in your head: Zed having really intense arguments with Ray Sunshine about AI risk and foom. Ray is convinced it will never happen; Zed, who checks the prediction markets daily, says it will in less than ten years.
actually, thinking about it, what would the otherverse make of AI? say a lab is about to finally develop true AGI and the aurum takes a look into it, what then? does it become an ex machina? does the aurum decide to force it into the solomon seal just in case? would it then be considered an other? a knotted person?
3 notes · View notes
jcmarchi · 13 days
Text
DeepMind’s AI-First Science Quest Continues with AlphaFold 3
New Post has been published on https://thedigitalinsider.com/deepminds-ai-first-science-quest-continues-with-alphafold-3/
DeepMind’s AI-First Science Quest Continues with AlphaFold 3
The new model can predict the structures of many of life's molecules, such as proteins, DNA, RNA, and several others.
Created Using Ideogram
Next Week in The Sequence:
Edge 395: We dive into task decomposition for autonomous agents, review Google's ReAct (Reason + Action) paper and the Bazed framework for building agents in TypeScript.
Edge 396: With all the noise about Apple’s AI strategy, we dive into some of their recent research in Ferret-UI.
You can subscribe to The Sequence below:
TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
📝 Editorial: DeepMind’s AI-First Science Quest Continues with AlphaFold 3
AI for science is one of my favorite forms of AI 😊. You can argue that the real test for AGI is when it's able to create new science. No company in the world has been pushing the boundaries of AI for science like Google DeepMind. Among the scientific achievements of DeepMind models, none has earned more renown than AlphaFold, the model that was able to predict protein structures from a sequence of amino acids. A few days ago, DeepMind published details about AlphaFold 3, which expands its prediction capabilities beyond just proteins to a broad spectrum of biomolecules.
Our understanding of life’s molecules is a core foundation of our understanding of biological life and certainly the cornerstone of drug discovery. AlphaFold 3 is able to predict the structure of large molecular structures such as proteins, RNA, DNA, or even small molecules such as ligands. Even more impressive is the fact that AlphaFold 3 can model the chemical interactions in those molecules which effectively control cell functioning. Starting with a list of molecules, AlphaFold 3 is able to generate a 3D structure that clearly visualizes its joint 3D structure, revealing its intricacies and interactions.
Science is nothing if not in the hands of scientists. One of the coolest things about AlphaFold 3 was the release of the AlphaFold Server, a free research tool that scientists can use to test new hypotheses and model biomolecular structures. AlphaFold 3 continues Google DeepMind's incredible scientific achievements in areas such as mathematics, physics, biology, and several others. The AlphaFold 3 paper is absolutely fascinating and a recommended read.
🔥 How to Stop Hallucinations
Introducing Galileo Protect – a revolutionary GenAI firewall that intercepts hallucinations, prompt attacks, security threats, and more!
See Protect live in action, and learn how it works, what makes it the first-of-its-kind, and ways to stop hallucinations in real-time.
🔎 ML Research
AlphaFold 3
Google DeepMind published a paper detailing AlphaFold 3, the new version of its groundbreaking bioscience model that was able to predict protein structures. The new version builds on its predecessor to predict the structure of other molecules such as DNA, RNA, ligands and more —> Read more.
Prometheus 2
Researchers from several labs, including Carnegie Mellon University and Allen AI, published a paper proposing Prometheus 2, an LLM specialized in evaluating other LLMs. Prometheus 2 builds on its predecessor and achieves results that closely mirror GPT-4 and human judgments —> Read more.
LoftQ
Microsoft Research published a paper proposing LoftQ, a technique that streamlines the fine-tuning process in LLMs. LoftQ combines ideas from methods such as LoRA and QLoRA with techniques like quantization and adaptive initialization to build a highly optimized fine-tuning process —> Read more.
LoRA LAnd
AI researchers from Predibase published a paper outlining LoRA Land, a group of 310 fine-tuned LLMs that rival GPT-4 across different domains. LoRA Land provides adapters built from 10 base models fine-tuned for 31 different tasks —> Read more.
NeMo Aligner
NVIDIA Research published a paper introducing NeMo-Aligner, a toolkit for model alignment that scales to hundreds of GPUs for training. NeMo-Aligner combines techniques such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), SteerLM, and Self-Play Fine-Tuning (SPIN) in a single toolkit —> Read more.
Google and DeepMind at ICLR
Google DeepMind researchers published a comprehensive summary of some of the papers submitted to the ICLR 2024 conference. The papers cover categories such as agents, frontier computer vision and foundational learning —> Read more.
🤖 Cool AI Tech Releases
Bedrock Studio
Amazon released Bedrock Studio, a web interface for developers to collaborate in the implementation of generative AI applications —> Read more.
Model Spec
OpenAI published Model Spec, a set of guidelines for shaping the desired behavior in foundation models —> Read more.
🛠 Real World AI
PayPal Cosmos.AI
Paypal shared some details about Cosmos.AI, its internal AI/ML infrastructure platform —> Read more.
Inside Einstein for Developers
Salesforce’s VP of Software Engineering shared some details about the development of the Einstein for Developers platform —> Read more.
📡AI Radar
Elon Musk's xAI valuation in a new round seems to keep growing, and it's now rumored to be at $18 billion.
Microsoft is working on a new model called MAI-1 that could rival OpenAI.
Apple shared more details about its AI strategy with new AI chips.
Docusign acquired AI contract management company Lexion.
GPU platform RunPod raised $20 million from Intel Capital and Dell.
Data and AI governance platform Atlan raised $105 million.
Upend AI search engine emerged from stealth using over 100 LLMs.
Red Hat announced several new generative AI features for its OpenShift and Enterprise Linux platforms.
Panax raised $10 million for its AI cash management platform.
Samsung Medison, a global medical equipment company affiliated with Samsung Electronics, acquired AI ultrasound company Sonio.
AI compliance platform Fairly AI announced a $1.7 million pre-seed.
Sienna AI raised $4.7 million for building a customer support agent.
SoftBank Vision Fund is divesting many of its investments as it shifts focus to AI.
Neurable raised $13 million to embed a brain-computer interface in everyday products.
Sagetap raised $6.8 million for an AI-powered marketplace for enterprise software.
TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
0 notes
sunaleisocial · 18 days
Text
How AI might shape LGBTQIA+ advocacy
New Post has been published on https://sunalei.org/news/how-ai-might-shape-lgbtqia-advocacy/
How AI might shape LGBTQIA+ advocacy
“AI Comes Out of the Closet” is a large language model (LLM)-based online system that leverages artificial intelligence-generated dialog and virtual characters to create complex social interaction simulations. These simulations allow users to experiment with and refine their approach to LGBTQIA+ advocacy in a safe and controlled environment.
The research is both personal and political to lead author D. Pillis, an MIT graduate student in media arts and sciences and research scientist in the Tangible Media group of the MIT Media Lab, as it is rooted in a landscape where LGBTQIA+ people continue to navigate the complexities of identity, acceptance, and visibility. Pillis’s work is driven by the need for advocacy simulations that not only address the current challenges faced by the LGBTQIA+ community, but also offer innovative solutions that leverage the potential of AI to build understanding, empathy, and support. This project is meant to test the belief that technology, when thoughtfully applied, can be a force for societal good, bridging gaps between diverse experiences and fostering a more inclusive world.
Pillis highlights the significant, yet often overlooked, connection between the LGBTQIA+ community and the development of AI and computing. He says, “AI has always been queer. Computing has always been queer,” drawing attention to the contributions of queer individuals in this field, beginning with the story of Alan Turing, a founding figure in computer science and AI, who faced legal punishment — chemical castration — for his homosexuality. Contrasting Turing’s experience with the present, Pillis notes the acceptance of OpenAI CEO Sam Altman’s openness about his queer identity, illustrating a broader shift toward inclusivity. This evolution from Turing to Altman highlights the influence of LGBTQIA+ individuals in shaping the field of AI.
“There’s something about queer culture that celebrates the artificial through kitsch, camp, and performance,” states Pillis. AI itself embodies the constructed, the performative — qualities deeply resonant with queer experience and expression. Through this lens, he argues for a recognition of the queerness at the heart of AI, not just in its history but in its very essence. 
Pillis found a collaborator in Pat Pataranutaporn, a graduate student in the Media Lab's Fluid Interfaces group. As is often the case at the Media Lab, their partnership began amid the lab's culture of interdisciplinary exploration, where Pataranutaporn's work on AI characters met Pillis's focus on 3D human simulation.
Taking on the challenge of interpreting text to gesture-based relationships was a significant technological hurdle. In Pataranutaporn’s research, he emphasizes creating conditions where people can thrive, not just fix issues, aiming to understand how AI can contribute to human flourishing across dimensions of “wisdom, wonder, and well-being.” In this project, Pataranutaporn focused on generating the dialogues that drove the virtual interactions. “It’s not just about making people more effective, or more efficient, or more productive. It’s about how you can support  multi-dimensional aspects of human growth and development.” 
Pattie Maes, the Germeshausen Professor of Media Arts and Sciences at the MIT Media Lab and advisor to this project, states, “AI offers tremendous new opportunities for supporting human learning, empowerment, and self development. I am proud and excited that this work pushes for AI technologies that benefit and enable people and humanity, rather than aiming for AGI [artificial general intelligence].”
Addressing urgent workplace concerns
The urgency of this project is underscored by findings that nearly 46 percent of LGBTQIA+ workers have experienced some form of unfair treatment at work — from being overlooked for employment opportunities to experiencing harassment. Approximately 46 percent of LGBTQIA+ individuals feel compelled to conceal their identity at work due to concerns about stereotyping, potentially making colleagues uncomfortable, or jeopardizing professional relationships.
The tech industry, in particular, presents a challenging landscape for LGBTQIA+ individuals. Data indicate that 33 percent of gay engineers perceive their sexual orientation as a barrier to career advancement. And over half of LGBTQIA+ workers report encountering homophobic jokes in the workplace, highlighting the need for cultural and behavioral change.
“AI Comes Out of the Closet” is designed as an online study to assess the simulator’s impact on fostering empathy, understanding, and advocacy skills toward LGBTQIA+ issues. Participants were introduced to an AI-generated environment, simulating real-world scenarios that LGBTQIA+ individuals might face, particularly focusing on the dynamics of coming out in the workplace.
Engaging with the simulation
Participants were randomly assigned to one of two interaction modes with the virtual characters: “First Person” or “Third Person.” The First Person mode placed participants in the shoes of a character navigating the coming-out process, creating a personal engagement with the simulation. The Third Person mode allowed participants to assume the role of an observer or director, influencing the storyline from an external vantage point, similar to the interactive audience in Forum Theater. This approach was designed to explore the impacts of immersive versus observational experiences.
Participants were guided through a series of simulated interactions, where virtual characters, powered by advanced AI and LLMs, presented realistic and dynamic responses to the participants’ inputs. The scenarios included key moments and decisions, portraying the emotional and social complexities of coming out.
The study’s scripted scenarios provided a structure for the AI’s interactions with participants. For example, in a scenario, a virtual character might disclose their LGBTQIA+ identity to a co-worker (represented by the participant), who then navigates the conversation with multiple choice responses. These choices are designed to portray a range of reactions, from supportive to neutral or even dismissive, allowing the study to capture a spectrum of participant attitudes and responses.
Following the simulation, participants were asked a series of questions aimed at gauging their levels of empathy, sympathy, and comfort with LGBTQIA+ advocacy. These questions aimed to reflect and predict how the simulation could change participants’ future behavior and thoughts in real situations.
The results
The study found an interesting difference in how the simulation affected empathy levels based on Third Person or First Person mode. In the Third Person mode, where participants watched and guided the action from outside, the study shows that participants felt more empathy and understanding toward LGBTQIA+ people in “coming out” situations. This suggests that watching and controlling the scenario helped them better relate to the experiences of LGBTQIA+ individuals.
However, the First Person mode, where participants acted as a character in the simulation, didn’t significantly change their empathy or ability to support others. This difference shows that the perspective we take might influence our reactions to simulated social situations, and being an observer might be better for increasing empathy.
While the increase in empathy and sympathy within the Third Person group was statistically significant, the study also uncovered areas that require further investigation. The impact of the simulation on participants’ comfort and confidence in LGBTQIA+ advocacy situations, for instance, presented mixed results, indicating a need for deeper examination.
Also, the research acknowledges limitations inherent in its methodology, including reliance on self-reported data and the controlled nature of the simulation scenarios. These factors, while necessary for the study’s initial exploration, suggest areas of future research to validate and expand upon the findings. The exploration of additional scenarios, diverse participant demographics, and longitudinal studies to assess the lasting impact of the simulation could be undertaken in future work.
“The most compelling surprise was how many people were both accepting and dismissive of LGBTQIA+ interactions at work,” says Pillis. This attitude highlights a wider trend where people might accept LGBTQIA+ individuals but still not fully recognize the importance of their experiences.
Potential real-world applications
Pillis envisions multiple opportunities for simulations like the one built for his research. 
In human resources and corporate training, the simulator could serve as a tool for fostering inclusive workplaces. By enabling employees to explore and understand the nuances of LGBTQIA+ experiences and advocacy, companies could cultivate more empathetic and supportive work environments, enhancing team cohesion and employee satisfaction.
For educators, the tool could offer a new approach to teaching empathy and social justice, integrating it into curricula to prepare students for the diverse world they live in. For parents, especially those of LGBTQIA+ children, the simulator could provide important insights and strategies for supporting their children through their coming-out processes and beyond.
Health care professionals could also benefit from training with the simulator, gaining a deeper understanding of LGBTQIA+ patient experiences to improve care and relationships. Mental health services, in particular, could use the tool to train therapists and counselors in providing more effective support for LGBTQIA+ clients.
In addition to Maes, Pillis and Pataranutaporn were joined by Misha Sra of the University of California at Santa Barbara on the study. 
0 notes
takahashicleaning · 2 months
Text
At TED
Cynthia Breazeal: The rise of personal robots
(For more details, see the link above)
As a note of caution: there is no leapfrogging in foundational technology, only in applied fields!
After watching George Lucas's Star Wars as a graduate student, Cynthia Breazeal wondered: why are robots used on Mars, but not in our living rooms?
The key she discovered was to train robots so they can interact with people.
Now, building on that idea, she creates robots that can teach, learn, and play with people.
Watch her surprising demo of a new interactive game for children.
In 2014, she began developing a social robot called JIBO.
Cynthia Breazeal is a pioneer of social robotics and an associate professor of media arts and sciences at the MIT Media Lab.
She studied artificial intelligence in graduate school at MIT and founded the Personal Robots Group.
(A personal idea)
The fundamental premise for any vision of the future is this:
In the age of artificial intelligence, AGI such as ChatGPT should, beyond delivering serendipitous messages that make life better, devote itself to the role of pre-distributing and redistributing money to support people's livelihoods.
For example, the GAFAM-style practice of automatically notifying you when someone other than you accesses your account is good support.
In 2014, development began on the social robot JIBO. I personally had high hopes for it, but development was suspended in 2018 due to funding difficulties.
As Walter De Brouwer also says, LLMs (large language models) became multimodal in October 2023. In terms of brain function, multimodality is close to the corpus callosum, which connects the right and left hemispheres.
If Apple, which as of 2024 ships Apple silicon with a built-in Neural Engine, obtained the rights from Disney, bought it all up, and reinvented it...
Then, by balancing security and privacy with passkeys, creating synergy with Apple Vision Pro, and enabling seamless control by gestures and more through the standard Home app, there would be a strong chance of sparking a revolution in social robot products.
Art is effective for recovery from PTSD, and it could likely also be applied to meditation and resilience.
And:
"Sign in with Apple" is also a social sign-in (social login) method.
How to switch to "Sign in with Apple":
Screens that let you sign in with one click using a Facebook, Google, Twitter, or LINE account (including other Japanese companies) appear all the time.
Since the fall of 2019, Apple has been making a very attractive proposal for this social sign-in (social login) method.
Introducing Sign In with Apple - WWDC 2019 - Videos - Apple Developer
This is a way to control the automatic handover of the account information registered with a social media service to third-party sites and services when you sign in from an app.
Implementation of the "Sign In with Apple" button became mandatory for apps and spread over several years; as of 2021 it applies to all of them.
With this method, the information provided to the service comes from the account information registered with your Apple ID.
A brief explanation of how to use it:
First, choose "Sign in with Apple" from the social sign-in buttons.
Next, register your name and email address. If you choose "Hide My Email" here, the email address registered in your Apple ID is not disclosed, and a forwarding address is registered with the service instead.
Finally, enter your Apple ID password to complete the registration.
From then on, you can log back in with one click via the "Continue with Apple" button.
The forwarding address can be checked under Settings → Apple ID → Password & Security → Sign in with Apple.
If you switch over from other social media account credentials, your email address stays private and protected even in the event of a leak.
Furthermore:
Apple introduced App Tracking Transparency (ATT) for the purpose of privacy protection
and invests aggressively in advanced security and strong privacy.
Here is what Apple has proposed as countermeasures:
Data minimization!
Minimize the information that is and can be collected. If the data cannot be taken, there is nothing to protect and nothing to leak!
On-device intelligence!
By completing processing inside the device, such as a smartphone, the privacy-sensitive parts stay on the device.
By minimizing what is uploaded to the cloud and the lookup process, the possibility of leaks and improper storage is eliminated!
High transparency and control!
Make explicit what data is collected and sent and how it is used, so that users can understand it and then choose or change it themselves!
Security protection!
For the data that inevitably arises on the device, security technologies such as fingerprint and face authentication firmly protect it against leaks!
The App Store privacy information section of December 2020 is part of the third initiative, "transparency and control."
Location data and the like are self-reported, but if an app is found to be deceiving Apple and users through improper use, that violates the guidelines and the contract, and removal from the App Store or revocation of developer registration is possible.
Since December 8, this privacy disclosure has been required at review time for new apps or when they are updated, regardless of OS: iOS, iPadOS, macOS, tvOS, and so on.
Next:
In March 2024, Apple announced that it will introduce PQ3, a protocol using post-quantum cryptography, into the iOS messaging app iMessage, and it may roll out worldwide within the year.
It would have been nice if Japan, a leader in quantum cryptography, had deployed this first in its government, but that's Apple for you!!
Is there a high risk that the contents of Mynaportal could be read and misused by the government?
Possibly (it is clearly written in the terms of service), so I had been cautiously watching, but no announcement of any improvement is in sight.
As improvements: for example, no one other than the person themselves should be able to view the contents of Mynaportal without an application or a warrant...
...and access by government insiders should be logged. I wish they would follow Apple's example with measures like these.
According to Apple,
the quantum-computer-ready cryptographic protocol PQ3 can withstand even highly sophisticated quantum attacks.
In other words, even state-of-the-art quantum computers will not be able to break it.
With compromise-resilient encryption and extensive defenses against highly sophisticated quantum attacks, PQ3
is the first messaging protocol to reach what Apple defines as Level 3 security,
providing protocol protections that surpass those in all other widely deployed messaging apps.
As far as we know, PQ3 has the strongest security properties of any at-scale messaging protocol in the world.
But if quantum computers do not exist yet, why does this matter now?
Apple explains: "The premise is simple: such attackers may collect large amounts of encrypted data now and keep it in order to decrypt it in the future."
"Even if they cannot decrypt the data today, they can hold on to it until they obtain a quantum computer that can."
Apple wants to protect today's iMessage conversations from future computers and attackers, particularly from the attack scenario called "Harvest Now, Decrypt Later."
In this scenario, data is stored for years until a device advanced enough to decrypt it, such as a quantum computer, is built.
Recent advances in quantum computing are beginning to make this realistically possible.
By Apple's account, as of 2024 iMessage is the only messaging service with Level 3 "post-quantum security."
Here, "post-quantum security" means the encryption can withstand codebreaking even by the quantum computers of the future.
PQ3 adds new post-quantum encryption keys to the set of public keys that each device generates on the iPhone and sends to Apple's servers as part of iMessage registration.
Under this scheme, the sender's device can obtain the recipient's public keys even when the recipient is offline, and post-quantum encryption keys are generated both at initial key establishment and during rekeying.
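As a rough illustration of why combining keys helps, here is a generic sketch in Python of deriving one session key from both a classical and a post-quantum shared secret. This is a toy example of the general hybrid pattern, not Apple's actual PQ3 design; the two "shared secrets" are random placeholders standing in for the outputs of a real ECDH exchange and a real post-quantum KEM.

import hashlib, hmac, os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # Standard HKDF (RFC 5869): extract a pseudorandom key, then expand it.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholders: in a real protocol these would come from an ECDH exchange
# (e.g., X25519) and a post-quantum KEM (e.g., ML-KEM/Kyber).
classical_shared_secret = os.urandom(32)
pq_shared_secret = os.urandom(32)

# Feeding both secrets into one key derivation means an attacker must break
# BOTH exchanges to recover the session key.
session_key = hkdf_sha256(
    ikm=classical_shared_secret + pq_shared_secret,
    salt=b"hybrid-demo-salt",
    info=b"message-session-key",
)
print(session_key.hex())

Because both secrets feed the derivation, a future quantum computer that breaks only the classical half still learns nothing about the session key.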
PQ3 will roll out gradually in iMessage starting with iOS 17.4, iPadOS 17.4, macOS 14.4, and watchOS 10.4, released in March 2024, and all of iMessage's cryptographic protocols will be replaced in the second half of this year.
Finally:
As background, the U.S. National Institute of Standards and Technology (NIST), anticipating that quantum computers capable of breaking existing cryptography in a short time will become practical,
is advancing the standardization of post-quantum cryptography (PQC), which remains hard to break even for quantum computers.
<Recommended sites>
Apple announces it will introduce PQ3, using post-quantum cryptography, into the iOS messaging app iMessage!!
Apple Vision Pro 2023
Walter De Brouwer: How AI is learning what it means to be human
Cathie Wood: Why AI will spark exponential economic growth
Lucy Hone: Three secrets of resilient people
Melissa Walker: Art can heal PTSD's invisible wounds
Nadine Burke Harris: How childhood trauma affects health across a lifetime
JIBO, The World’s First Family Robot.
Vikram Sharma: How quantum physics can make encryption stronger
NVIDIA Jetson Partner Stories: Jibo Makes Its Robot More Social with the Jetson Platform
<Sponsored by>
A present from Takahashi Cleaning in Kamiya, Kita-ku, Tokyo
Unique services on offer! Takahashi Cleaning finishes your clothes by hand, by artisans, at an affordable 50. Round-trip shipping and song purchases available. For details, call now. Tokyo only: the northern and eastern wards, around Shibuya-ku, and neighboring local wards are all OK.
Takahashi Cleaning, Kamiya, Kita-ku, Tokyo, on Facebook
0 notes
financialinvests · 2 months
Link
0 notes
cadmar · 2 months
Text
Unshakeable
Once you understand and grasp how the brain automatically self-organizes after receiving all these sensory signals, the next level will reveal itself: patterns of waves, each wave with a different frequency and force. Each object has its own sensory frequency that distinguishes it from other objects, environments, and experiences. These then become our mental images. Our mental images of objects, environments, and situations, along with our reactive emotional experiences, are stored and become our memories.
With artificial intelligence, the latest leak from a computer lab is that the next level, leading toward a true AGI (artificial general intelligence) that mimics human thinking, is known as Q*. Q* is an energy-based model. Currently, AI is a large language model that can only predict the next letter and the next word in a sentence. It has no "understanding" of what it is doing or of its outputs. It merely follows a complex program of steps within a tree of billions of branches, a chain of many steps.
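For what "predicting the next word" means mechanically, here is a toy sketch in Python: a hypothetical bigram model, nothing like a real LLM's scale, but the same autoregressive loop.

import random

# A tiny bigram "language model": counts of which word follows which.
corpus = "the river overflows the bank and the water shapes the bank".split()
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start: str, steps: int) -> str:
    # Autoregressive loop: each output word is fed back as the next input.
    words = [start]
    for _ in range(steps):
        candidates = table.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # sample the "next word"
    return " ".join(words)

print(generate("the", 8))

Each step picks only the next token given what came before; nothing in the loop represents goals or understanding, which is the gap the post says energy-based approaches like Q* aim to close.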
Q* attempts an analogy of water overflowing a river: the pattern left on the riverbank depends on the amount of water overflowing, the force of the flow, and the medium of the bank (sand, gravel, rock, some combination, and so on), and the variety of possible patterns is something like 10 to the power of 60: that is 10 multiplied by 10, sixty times over! This mirrors the first level of our brain receiving sensory signals and self-organizing them.
Unshakeable happens when one is at the next level, as each neural pattern changes with each incoming sensory signal. Your memories are rigid and try to freeze these patterns, but this only makes them more fragile and more shakeable! Beliefs, values, words, ideas, thoughts, morality, and desires are all created from sensory signals and are shakeable. Desires are emotional reactions created from your memories of previous reactions, experiences, and outcomes. Sensory signals are shakeable because they originate from shakeable objects, which are always changing.
Unshakeable is direct connection to the unshakeable Source.
0 notes
7ooo-ru · 2 months
Photo
Samsung launches a lab for next-generation AI research
Samsung, in its pursuit of artificial general intelligence (AGI), has announced the launch of the Samsung Semiconductor AGI Computing Lab. These labs, located in the United States and South Korea, will specialize in developing a new generation of semiconductors designed to support future AI with capabilities equal to or surpassing those of humans.
More details: https://7ooo.ru/group/2024/03/20/935-samsung-zapustil-laboratoriyu-dlya-issledovaniyii-sleduyuschego-pokoleniya-grss-291888226.html
0 notes