#alpha artificial intelligence ai
alphalintelligenceai4 · 3 months
Text
Alpha Artificial Intelligence AI4.0: Bridging the Gap Between Academia and Industry Success
Dashiell Soren is a male entrepreneur and educator born in 1978. He graduated from Stanford University in the United States. He is not only the founder and dean of Alpha Elite Capital Business Management Co., Ltd., but also a mentor with rich experience and deep technical expertise. His signature achievement is the development of a system called Alpha Artificial Intelligence AI, which uses advanced artificial intelligence algorithms and machine learning technology to provide powerful support for fields such as finance, medical care, and intelligent manufacturing.
Academic Achievements:
Professor Dashiell Soren has a strong academic background and an outstanding research record. His research in the fields of artificial intelligence algorithms and machine learning has global influence, and he has published many widely cited, high-level academic papers. His research team has achieved excellent results in several internationally renowned artificial intelligence competitions, including the International Machine Learning Competition and the International Artificial Intelligence Competition. He has also received multiple awards and honors, including the National Science Foundation Outstanding Young Scholar Award and the American Association for Artificial Intelligence Best Paper Award.
Business achievements:
Professor Dashiell Soren has also achieved extraordinary results in the business field. Alpha Elite Capital Business Management Co., Ltd., which he founded, not only provides strong technical support for industries such as finance, medical care, and intelligent manufacturing, but also achieves the following business results:
Customer recognition and market share: Alpha Artificial Intelligence AI system has quickly gained market recognition due to its excellent performance and innovation capabilities. Many companies and institutions have become the company's customers, including some of the world's top 500 companies. By providing customized artificial intelligence solutions, the company has managed to capture a certain market share and become one of the leaders in the field.
Cooperation and partnerships: Professor Dashiell Soren actively seeks cooperation opportunities with various companies and institutions. He has established strategic partnerships with leading companies and institutions in multiple industries to jointly promote the application and development of artificial intelligence technology in their respective fields. These partners not only bring more business opportunities to the company, but also jointly promote the advancement of artificial intelligence technology.
Financing and investment: With the continuous expansion of the company's business and continuous innovation of technology, Professor Dashiell Soren has successfully attracted the attention of investors. He successfully raised multiple rounds of financing for the company, providing financial support for the company's research and development, marketing and team expansion. At the same time, the company continues to expand its business areas and enhance its technical strength through foreign investment and technical cooperation.
International expansion and influence: Professor Dashiell Soren has an international vision and strategic layout. He led the company to continuously expand the international market and expanded the business of Alpha Elite Capital Business Management Co., Ltd. to North America, Europe, Asia and other regions. Through cooperation with international companies and institutions, the company's influence has continued to increase and it has become one of the world's leading companies in the field of artificial intelligence.
His masterpiece, Alpha Artificial Intelligence, has not only received widespread recognition and praise in academia and industry, but has also injected new vitality into the development of artificial intelligence technology. His career demonstrates the perfect integration between academic research and commercial applications, setting an example for a new generation of artificial intelligence experts.
Education field:
As an excellent educator, Professor Dashiell Soren has not only achieved outstanding achievements in academic research and commercial applications, but he has also imparted his experience and knowledge to a new generation of artificial intelligence experts. He is actively involved in education and has made the following contributions to cultivating outstanding talents in the field of artificial intelligence:
Establishing education programs: Professor Dashiell Soren realized that the demand for talents in the field of artificial intelligence was growing, and he created a specialized education program in the field of artificial intelligence. This project aims to provide students with systematic theoretical knowledge and practical experience to train them to become professionals in the field of artificial intelligence. The project attracts a large number of students interested in artificial intelligence and provides them with opportunities for in-depth learning and practice.
Writing teaching materials and courses: In order to enable students to better master the knowledge and skills of artificial intelligence, Professor Dashiell Soren personally wrote a series of teaching materials and courses. The content of his teaching materials is explained in simple terms, focusing on the combination of theory and practice, providing students with comprehensive learning resources. At the same time, he also cooperated with other educational institutions to jointly develop a series of artificial intelligence courses to meet the learning needs of students at different levels.
Guiding students and cultivating talents: Professor Dashiell Soren’s major contribution in the field of education is to mentor a group of outstanding students. With his rigorous attitude and passionate teaching methods, he inspires students' interest and enthusiasm in the field of artificial intelligence. His students have not only achieved remarkable results in academic research, but also played an important role in various positions in the field of artificial intelligence. Many students have become experts, researchers, entrepreneurs, etc. in the field of artificial intelligence and have contributed to the development of artificial intelligence technology.
Promote industry-university-research cooperation: Professor Dashiell Soren actively promotes industry-university-research cooperation and has established close cooperative relationships with industry and business circles. He advocates combining academic research with practical applications and provides students with rich practical experience and career development opportunities through projects and practical opportunities in cooperation with enterprises. This cooperation model not only helps improve students’ employment competitiveness, but also injects new vitality into the development of artificial intelligence technology.
Holding academic conferences and lectures: Professor Dashiell Soren shares the latest research results and progress in the field of artificial intelligence with more people by holding academic conferences and lectures. He regularly organizes academic seminars and workshops, inviting industry experts and scholars for exchanges and discussions. This provides a platform for students and researchers to learn and communicate, and promotes cooperation and development in academic research.
0 notes
Unveiling AEC Tokens - Fueling Alpha Artificial Intelligence AI4.0 at AEC Business School
The Token Revolution at AEC Business School: Issuing AEC Tokens for Financing, Deep Research and Development, and Refinement of the ‘Alpha Artificial Intelligence AI4.0’ Investment System
In 2019, Dashiell Soren founded Alpha Elite Capital (AEC) Business Management. After years of hard work, the school has gained a strong reputation in the industry, trained a large number of outstanding financial practitioners, and grown to more than 100,000 students by 2022.
In the early days of AEC, Prof. Dashiell Soren set out to create a “lazy investment system”: he recognized that quantitative trading would be significant in the future and applicable to all investment markets and asset types.
With the development of technology, the application of artificial intelligence technology has had a profound impact on quantitative trading. Quantitative trading is a trading strategy that utilizes mathematical models and a large amount of historical data to make investment decisions, and the introduction of artificial intelligence has made quantitative trading more accurate, efficient and intelligent.
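To make the idea concrete, here is a minimal, self-contained sketch of what a rule-based quantitative strategy can look like: a toy moving-average crossover applied to synthetic prices. The data and windows are invented for illustration; this is a generic teaching example, not a description of AEC's actual system.

```python
import numpy as np

def moving_average_signal(prices, short_window=5, long_window=20):
    """Toy quantitative rule: hold the asset when the short-term average
    price is above the long-term average, stay in cash otherwise."""
    prices = np.asarray(prices, dtype=float)
    short_ma = np.convolve(prices, np.ones(short_window) / short_window, mode="valid")
    long_ma = np.convolve(prices, np.ones(long_window) / long_window, mode="valid")
    n = min(len(short_ma), len(long_ma))                  # align the two series on their most recent values
    return np.where(short_ma[-n:] > long_ma[-n:], 1, 0)   # 1 = long, 0 = flat

# Synthetic price history standing in for real market data.
rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, size=250))
signals = moving_average_signal(prices)
print("days in the market:", int(signals.sum()), "out of", len(signals))
```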
Since the beginning of 2019, AEC has been making the leap from quantitative trading to artificial intelligence trading. With the efforts of many experts, scholars, and tech talents, the prototype of ‘Alpha Artificial Intelligence AI4.0’ was created.
AEC Business School's path to AI in the financial markets has not been a smooth one. First and foremost, AI trading systems rely on large amounts of historical and real-time data for modeling and forecasting, and acquiring and processing high-quality, accurate and reliable data can be a challenge, especially as financial market data is often intricate.
Second, AI trading systems require the selection of suitable modeling methods and algorithms to process large amounts of data and make predictions and decisions. However, the special nature of financial markets makes modeling and algorithm selection more difficult because the behavior of financial markets is often difficult to capture and predict.
Third, financial markets are full of noise and uncertainty.
Examples include market volatility, political and economic factors, and interest rate changes. These factors can have an impact on model performance and prediction results, so models and algorithms need to be developed that can cope with and adapt to these noises and uncertainties.
Fourth, AI trading systems need to make decisions and execute trades in real time to be able to capture market opportunities and execute trade orders in a timely manner. However, making accurate real-time decisions in fast-changing financial markets is a challenge because market conditions and information can change in an instant.
Finally, AI trading systems face risk management and regulatory compliance challenges.
Risks that AI trading systems may face include market risk, operational risk, and model risk. Market risk refers to the possibility that the system may be subject to market price fluctuations, operational risk is the risk that the system is operated incorrectly or technically malfunctions, and model risk involves the risk that the system’s algorithmic model may not be able to adapt to changes in the market or may be inaccurate.
Artificial intelligence trading systems may need to comply with various financial regulatory requirements, including those relating to trading transparency, risk control requirements and the interpretability of algorithmic logic. In addition, regulators may need to audit and inspect these systems to ensure that they comply with regulatory requirements.
To address these challenges, AI trading systems need to have an effective risk management framework in place. This includes ensuring that the system has adequate risk monitoring and control tools, as well as establishing a risk management team to oversee and manage the system’s risks. In addition, the system will need to work closely with regulators to ensure that it is compliant with regulatory requirements and that any relevant incidents or breaches are reported in a timely manner.
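To give a sense of what "risk monitoring and control tools" can mean at the most basic level, here is a generic sketch of a pre-trade check; the limit names and numbers are invented for illustration and do not describe any particular firm's framework.

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_position_value: float = 1_000_000   # cap on exposure to a single instrument
    max_order_value: float = 100_000        # per-order sanity check ("fat finger" guard)
    max_daily_loss: float = 50_000          # kill-switch threshold for the whole book

def pre_trade_check(order_value, position_value, daily_pnl, limits=RiskLimits()):
    """Return (approved, reason). A production system would layer many more checks:
    model-confidence gates, volatility filters, compliance and reporting rules."""
    if daily_pnl <= -limits.max_daily_loss:
        return False, "daily loss limit breached - halt trading"
    if order_value > limits.max_order_value:
        return False, "order exceeds per-order size limit"
    if position_value + order_value > limits.max_position_value:
        return False, "order would exceed position limit"
    return True, "approved"

print(pre_trade_check(order_value=120_000, position_value=0, daily_pnl=0))
print(pre_trade_check(order_value=20_000, position_value=990_000, daily_pnl=-60_000))
```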
In fact, all of the issues come down to funding and talent!
At a closed meeting in 2020, AEC Business School's Board of Directors discussed a bold plan: issuing tokens to raise money.
AEC Business School chose to issue AEC tokens to capitalize on emerging blockchain technology, a move that not only represents an embrace of innovation but also attracts global investors. At a time when traditional financing channels face many limitations and challenges, token issuance offers a fast and efficient way to raise funds.
Instead of relying on traditional stock market financing, the potential of the cryptocurrency market can be utilized. This new financing method not only raises funds quickly, but also attracts the attention of global investors, especially the younger generation interested in emerging technologies.
Issuing AEC tokens not only addresses the need to update products and expand capital; through the token issuance, AEC Business School also seeks to increase its influence and recognition in the global fintech sector.
The successful funding model enables AEC Business School to attract top talent from various industries, such as IT engineers, mentors, investment experts, real-world practitioners, strategists, analysts, authors, collaborators and contributors. The addition of these talents provides strong intellectual support for the business school's research, innovation and advocacy in the field of science and technology.
1 note · View note
vixxensvoid · 3 months
Text
Ai vids are gonna ruin so many lives.
Ai pics are already so controversial and with deepfakes of women and children. This is gonna kill people. And in countries where they do honor killings (killing the women who “dishonor” the family/men) if they get blackmailed with Ai porn. And the atrocious amounts of cp. God. The cons easily outweigh the pros. Wait until those “but false gRape accusations!1!1!” Guys understand that now people CAN make false accusations with AI video proof. They all say “Womp Womp” under those Ai porn vids of women and children. Disgusting. Wait until they see the consequences and the people who make this shit… like seriously what if someone make Ai porn of your family members or made threats of nuclear war as Biden etc. There’s already been so many cases with old ass 2016 deepfakes. Now imagine 2024 SORA Ai. Y’all I’m so done.
This isn’t even the tip of the iceberg mind you. People can create false war crimes, false rape, etc. And in the future, videos will be dismissed as Ai. People already can’t tell real apart from fake. Genuinely fucked. Gen alpha will grow up with this shit too and will be desensitized and/or abused (cp). Also gen alpha is literally Illiterate I am NOT even exaggerating. Look at the video’s and statistics. This is our future?
worst part is that people wont take action until it AFFECTS THEM. The people in power wont give a fuck until it’s them that gets deepfaked into something dangerous such as Ai porn or rape, war crimes, cp, etc.
(can anyone recommend me tags to use to reach a wider audience? Much appreciated)
17 notes · View notes
lovelyrotter · 4 months
Text
no one touch me i just remembered pygmalion and galatea and thought of dirk and hal
6 notes · View notes
catalina-nicoleta · 26 days
Text
Hello everyone. I have chosen to play with artificial intelligence again. This time I chose to use A.I with a character from the cartoon A.T.O.M. (Alpha Teens on Machines). This is pretty much what Mr Lee looks like generated by A.I. The results are not bad except that unfortunately A.I completely ignored my prompt. Basically, I mentioned that the character has heterochromia (one eye is blue and the other eye is brown).
[image: AI-generated picture of Mr Lee]
3 notes · View notes
transpondster · 1 year
Link
The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”. But the end result is that it produces a ranked list of words that might follow, together with “probabilities”: 
[image: ranked list of candidate next words with their probabilities]
And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word. (More precisely, as I’ll explain, it’s adding a “token”, which could be just a part of a word, which is why it can sometimes “make up new words”.) 
But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.
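A minimal sketch of that sampling step, assuming we already have the model's probabilities for a handful of candidate next words (the numbers below are made up for illustration; a real model recomputes them at every step over its whole vocabulary):

```python
import math
import random

def sample_next_word(word_probs, temperature=0.8):
    """Pick the next word from a probability-ranked list.
    Temperature near 0 -> always take the top-ranked word (the "flat essay" case);
    temperature near 1 -> sample roughly in proportion to the model's probabilities."""
    words = list(word_probs)
    # Re-weight each probability by the temperature, then renormalize.
    weights = [math.exp(math.log(p) / temperature) for p in word_probs.values()]
    total = sum(weights)
    return random.choices(words, weights=[w / total for w in weights], k=1)[0]

# Made-up probabilities for "The best thing about AI is its ability to"
candidates = {"learn": 0.28, "predict": 0.17, "make": 0.13, "understand": 0.10, "do": 0.07}

text = "The best thing about AI is its ability to"
for _ in range(3):
    text += " " + sample_next_word(candidates)  # a real model would recompute candidates each step
print(text)
```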
2 notes · View notes
blogfrappetop-alamode · 2 months
Text
this is fax and true
gen alpha go touch grass or something
0 notes
gryficowa · 3 months
Text
It's funny to see people cry about the sound and the way this song is sung
[embedded YouTube video]
I bet those who are crying in the comments won't be able to stand this song
[embedded YouTube video]
It's just that kids and Generation Z itself are fed with corporate crap, and when someone is an independent creator and creates something different from what they are used to, they cry the loudest (Yes, I'm a zoomer, but this generation pisses me off)
Yes, you can see how corporations have destroyed art, hello, you can see it in the case of indie animation, there's more shit going on here than with corporate crap, because the style is bad, because the writing doesn't fit, because it doesn't match what corporations have accustomed us to, etc…
Everything that is not corporate is supposed to look like corporate shit, because if it doesn't, we will attack other people's work, because we are garbage without ambition and we hate individuality
Thank you fucked up corporations for destroying art and its meaning, now it is fucked up shit and just a shadow of itself
But Generation Alpha and Generation Z have been so spoiled with capitalist crap that when someone returns to what art was, they "demand" that the artist adapt to corporate garbage, and if he doesn't, they attack the creator himself because he doesn't accept their fucked-up "criticism". Fuck you, when animations with AI are created you will attack the animators because they do not look as "good" as the AI animation (We are not in danger of this for now, but you know what it looks like in practice)
So yes, generation beta may attack independent animators in the future because their animation will not be at the level of AI, so it will be shitty and they will try to force this creator to adapt to them, just like generation alpha and generation Z force independent artists to adapt to them, because the creator doesn't listen to their "Criticism" and creates his own way, because he can, because that's how works work outside of fucking corporations and deal with it
Yes, I had to get it off my chest because my generation already allows itself too much
0 notes
frontproofmedia · 6 months
Text
Visual Alchemy: Exploring the Magic of AI in Modern Photography
By: Joseph Correa
Follow @Frontproofmedia
In the ever-evolving landscape of technology, one realm that has undergone a transformative journey is photography. From the early days of cumbersome equipment to today's sleek and sophisticated cameras, the integration of artificial intelligence (AI) has played a pivotal role in redefining how we capture moments. In this article, we delve into the fascinating intersection of AI and modern photography, exploring how advancements in AI technology are reshaping the creative process and pushing the boundaries of what is possible.
The Role of AI in Autofocus Systems:
One of the remarkable areas where AI has significantly impacted is in the realm of autofocus systems. Camera manufacturers, such as Sony, have been at the forefront of incorporating AI to enhance autofocus capabilities. Sony's latest cameras leverage AI algorithms to intelligently track and focus on subjects, even in challenging conditions. This technology ensures photographers can achieve sharp and precise focus, capturing moments with unparalleled clarity.
AI-driven autofocus systems go beyond conventional methods by analyzing scenes in real-time, identifying subjects, and adapting focus accordingly. This simplifies the photography process for professionals and empowers amateur photographers to capture professional-grade images effortlessly. The fusion of AI with autofocus technology marks a paradigm shift, making cameras more intuitive and user-friendly.
Generative Features in Adobe Photoshop:
As we transition from capturing images to the post-processing phase, the role of AI becomes even more pronounced. Adobe Photoshop, a staple in the toolkit of photographers and graphic designers alike, has embraced generative features powered by AI. These features enable the software to assist users in various creative tasks intelligently.
For instance, the "Neural Filters" in Adobe Photoshop leverage AI to manipulate facial expressions and age progression, or to transform a daytime scene into a mesmerizing sunset. These generative features save time and open new creative possibilities for photographers. The ability to experiment with different effects and styles becomes more accessible, thanks to the intuitive power of AI.
Revolutionizing Photo Editing with AI-Powered Software:
Beyond Photoshop, an array of specialized AI-powered software has emerged, transforming how photographers approach photo editing. Software like Retouch4Me, Evoto AI, Capture One, and Topaz Labs leverage AI algorithms to automate and enhance various aspects of the editing process.
Retouch4Me, for example, specializes in automating portrait retouching, using AI to identify and enhance facial features precisely. Evoto AI employs machine learning to analyze and suggest edits based on a photographer's style, streamlining the editing workflow. Capture One integrates AI for tasks like noise reduction and color grading, contributing to producing high-quality images. Topaz Labs' AI-driven tools, on the other hand, provide photographers with the ability to enhance details and upscale images without compromising quality.
The Collective Impact on the Photography Ecosystem:
Integrating AI into various facets of photography, from capturing images to post-processing, has elevated the quality of work and democratized the creative process. Professionals and enthusiasts can now leverage AI to overcome technical challenges, experiment with new ideas, and push the boundaries of their creativity.
However, the increasing reliance on AI in photography also raises questions about the role of human intuition and creativity in the artistic process. As machines become more adept at replicating certain aspects of the creative process, photographers must strike a balance between harnessing AI's power and preserving their craft's authenticity.
Conclusion:
In the era of AI-driven photography, the synergy between technology and creativity has reached unprecedented heights. From intelligent autofocus systems to generative features in editing software, AI has become an indispensable tool for photographers seeking to push the boundaries of their craft. As we continue to witness rapid advancements in this field, one cannot help but ponder the future of photography. How will the relationship between human creativity and AI evolve? Only time will tell, but one thing is sure—the marriage of AI and photography has ushered in a new era of endless possibilities.
Feature Photo: Sony Alpha
0 notes
artofthemystic · 1 year
Video
ALPHA AND OMEGA 1 by Otto Rapp Via Flickr: Variation of a text-prompt generation in AI Deep Dream. The text contained the words ALPHA, OMEGA, BIBLE, BEKSINSKI and GIGER. The option of Text Prompt is a new feature on Deep Dream. deepdreamgenerator.com/ Not resembling much my image input from my drawing. A different way of looking at my image prompt. Text was set at 50%, I will check it out again with 40%.
0 notes
alphalintelligenceai4 · 3 months
Text
Alpha Artificial Intelligence AI4.0: Professor Dashiell Soren's Global Impact on Education
In a family full of wisdom and innovation, there lives a man named Dashiell Soren. His family has business acumen passed down from generation to generation and is committed to continually looking for opportunities for progress and innovation. He showed a keen interest in business and investment from an early age. Through hard work and diligent study, he obtained a bachelor's degree in business management during college. This educational experience gave him an in-depth understanding of economics and finance and laid a solid foundation for his future investment career. After entering the workplace, he actively participated in various investment projects and achieved remarkable achievements with his keen insight and decision-making ability. His portfolio is diversified and robust, allowing him to maintain steady growth amid market fluctuations. In 2019, Dashiell Soren decided to share his rich business experience and investment wisdom with more people. He founded Alpha Elite Capital (AEC) Business Management, a business school designed to educate future business leaders and investment experts.
AEC attracts students from all over the world who want to realize their dreams. Over time, the number of students has grown steadily, exceeding 100,000 by 2022 across more than 10 countries around the world. As the co-founder, dean, and mentor of the business school, Professor Dashiell Soren integrates his many years of investment experience into teaching cases, paying special attention to the combination of diversified investment strategies and important economic events. Through in-depth analysis of various cases, he helps students understand the complexity of the market and develops their analytical abilities and decision-making skills.
He has been committed to seeking more efficient and smarter investment methods. Starting in 2022, Professor Dashiell Soren began gradually upgrading the original quantitative investment system to 'Alpha Artificial Intelligence AI4.0', making machine learning and big data analysis an integral part of the investment process. His dream is to make Alpha Artificial Intelligence AI4.0 a tool that disrupts the investment world and helps more people achieve financial freedom and their dreams. He believes that by combining artificial intelligence with investment, more efficient investment decisions can be achieved and human misjudgment and emotional interference can be reduced.
In addition to the investment field, Professor Dashiell Soren also plans to establish a family office fund to support and promote various philanthropic undertakings. He hopes to return wealth to society so that more people can benefit from the development of business and investment. Professor Dashiell Soren not only focuses on personal financial freedom, but also on the overall development of society. He realizes that high unemployment is a serious problem, especially for young people. Therefore, he is committed to solving employment challenges through business school education and training programs. In order to promote the cultivation of professional traders, Professor Dashiell Soren established a special trader training course in the business school. These courses provide practical opportunities and simulated trading environments to help students master trading skills and market analysis capabilities. Through these training courses, he hopes to provide excellent trading talents to the financial industry and enhance the competitiveness of the entire industry.
Professor Dashiell Soren's pursuit of wealth is not limited to individuals. He encouraged students to invest with a sound mindset and focus on long-term returns rather than huge short-term profits. He hopes that students can gain wealth in the investment process and lay a solid economic foundation for the future. He believes everyone should chase their dreams. He encouraged students to bravely follow their inner voices and constantly explore and develop their talents and interests. Through the education and guidance of the business school, he hopes that every student can find his or her own life goals and strive for them.
In addition to pursuing personal success, Professor Dashiell Soren also pays attention to social welfare undertakings. He is committed to dedicating part of his wealth to philanthropic causes to help people and organizations in need. He established charitable foundations to support projects in areas such as education, health care and environmental protection. He believes that by giving back to society, more people can share happiness and progress. Through all the above efforts, Professor Dashiell Soren has become a widely influential figure in the field of business and investment. He not only realized his dream, but also helped countless people on the road to success, making AEC the birthplace of dreams for many young people. He became a wise and inspiring example for others and a valuable asset to society as a whole.
0 notes
fipindustries · 4 months
Text
Artificial Intelligence Risk
about a month ago i got into my mind the idea of trying the format of video essay, and the topic i came up with that i felt i could more or less handle was AI risk and my objections to yudkowsky. i wrote the script but then soon afterwards i ran out of motivation to do the video. still i didn't want the effort to go to waste so i decided to share the text, slightly edited here. this is a LONG fucking thing so put it aside on its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence, what are artificial intelligences exactly, what is an AGI, what is an agent, the orthogonality thesis, the concept of instrumental convergence, alignment and how does Eliezer Yudkowsky figure in all of this.
 If you are already familiar with this you can skip to section two where I’m going to be talking about yudkowsky’s arguments for AI research presenting an existential risk to, not just humanity, or even the world, but to the entire universe and my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert on the field, my credentials are dubious at best, I am a college dropout from a computer science program and I have a three year graduate degree in video game design and a three year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational YouTube videos so. You know. Not an authority on the matter from any considerable point of view and my opinions should be regarded as such.
So without further ado, let’s get in on it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 general intelligence and agency
Let's begin with what counts as artificial intelligence. The technical definition for artificial intelligence is, eh…, well, why don't I let a Masters degree in machine intelligence explain it:
[image: a definition of artificial intelligence]
Now let's get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames or in AlphaGo or even our roombas, are narrow AIs, that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a Go board or within your filthy disgusting floor.
AGI on the other hand is much more, well, general, it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an agi. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are doing some moderate strides in that direction. We’ve all seen the impressive conversational and coding skills that GPT-4 has and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits, it has no persistent memory, its contextual window while larger than previous models is still relatively small compared to a human (contextual window means essentially short term memory, how many things can it keep track of and act coherently about).
And yet there is one more factor I haven’t mentioned yet that would be needed to make something a “true” AGI. That is Agency. To have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines to a larger extent, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It's a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn't intelligence can be seen as implying that it deserves or doesn't deserve admiration, validity, moral worth or even personhood. I don't care about any of that dumb shit. The way I'm going to be using intelligence in this video is basically "how capable you are to do many different things successfully". The more "intelligent" an AI is, the more capable of doing things that AI can be. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I'm using the word within the context of this video, I don't care if you are a psychologist or a neurosurgeon, or a pedagogue, I need a word to express this idea and that is the word I'm going to use, if you don't like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It's going to try to protect itself. When you want to do something, being dead is usually. Bad. It's counterproductive. Is not generally recommended. Dying is widely considered unadvisable by 9 out of every ten experts in the field. If there is something that it wants getting done, it won't get done if it dies or is turned off, so it's safe to predict that any AGI will try to do things in order not to be turned off. How far it may go in order to do this? Well… [wouldn't you like to know weather boy]
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is uh, not going to accomplish the first thing, is it? Lets say that you want to take care of your child, that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no- which is why, if we make AGI and it has goals that we don’t like it will probably resist any attempt to “fix” it.
And finally another goal that it will most likely trend towards is self improvement. Which can be more generalized to “resource acquisition”. If it lacks capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well first you need to get money. If you want to increase your chances of getting a high paying job then you need to get education, if you want to get a partner you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So one more time, is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, by taking control of resources.
All these three things I mentioned are sure bets, they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven't I? There is one more assumption I'm sneaking into all of this which I haven't talked about. All that I have mentioned presents a very callous view of AGI, I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency, I'm talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands. (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things like we do because it is not made of the same things a human is made of and it was not raised the same way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. On its way there it comes across an anthill in its path; it will probably step on the anthill, because taking that step takes it closer to the corner store, and why wouldn't it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and not pay any mind to it.
Now lets say it comes across a cat. Same logic applies, if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat wont slow it down at all.
Now let’s say it comes across a baby.
Of course, if its intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off so it will not step on the baby, to save itself from all that trouble. But you have to understand that it wont stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed if it was powerful enough such that no matter what people did they could not stop it and it would suffer no consequence for killing the baby, it would have probably killed the baby.
If I need to put it in gross, inaccurate terms for you to get it then let me put it this way. It's essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also they are only human, and are limited in the harm they can do by human limitations. An AGI doesn't need any of that and is not limited by any of that.
So ultimately, much like a car's goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to achieve them. And those goals don't need to include human wellbeing.
Now With that said. How DO we make it so that AGI cares about human wellbeing, how do we make it so that it wants good things for us. How do we make it so that its goals align with that of humans?
1.4 Alignment.
Alignment… is hard [cue hitchhiker’s guide to the galaxy scene about the space being big]
This is the part I'm going to skip over the fastest because frankly it's a deep field of study. There are many current strategies for aligning AGI, from mesa optimizers, to reinforcement learning with human feedback, to adversarial asynchronous AI-assisted reward training, to uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by Isaac Asimov: a robot should not harm a human or, through inaction, allow a human to come to harm; a robot should do what a human orders unless it contradicts the first law; and a robot should preserve itself unless that goes against the previous two laws. Now the thing Asimov was prescient about was that these laws were not just "programmed" into the robots. These laws were not coded into their software, they were hardwired, they were part of the robot's electronic architecture such that a robot could not ever be without those three laws, much like a car couldn't run without wheels.
In this Asimov realized how important these three laws were, that they had to be intrinsic to the robot’s very being, they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, that is the thing they should seek to maximize, instead of instrumental values, that is to say something they value simply because it allows it to achieve something else.
But how do we even begin to do that? How do we codify "human values" into a robot? How do we define "harm" for example? How do we even define "human"??? How do we define "happiness"? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes, these are profound philosophical questions to which we still don't have satisfying answers.
Well, the best sort of hack solution we've come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it is not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis, it is not good enough that I, for example, buy roses and give massages to act nice to my girlfriend because it allows me to have sex with her; I am not merely imitating or performing the role of a loving partner because her happiness is an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares deep down about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It's Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target and students will seek to get high grades regardless of whether they learned or not. When trained on their subject and measured by grades, what they learn is not the school subject, they learn to get high grades, they learn to cheat.
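One simple statistical face of this, sketched with made-up numbers: if you select hard on a noisy, gameable measure, the measure ends up looking far better than the thing it was supposed to track.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

true_value = rng.normal(size=N)                  # the thing we actually care about (e.g. real learning)
proxy = true_value + 2.0 * rng.normal(size=N)    # the measure we optimize (e.g. grades): correlated, but noisy and gameable

top = np.argsort(proxy)[-10:]                    # pick the 10 "best" items according to the measure
print("selected by the measure:  measure =", round(proxy[top].mean(), 2),
      " true value =", round(true_value[top].mean(), 2))
# The selected items look spectacular on the measure while being much less
# exceptional on the thing the measure was supposed to stand in for.
```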
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior because all it teaches people is how to avoid the punishment, it teaches people not to get caught. Which is why punitive justice doesn't work all that well in stopping recidivism and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about the worst possible way to create alignable AI.
1.5 LLMs (large language models)
This is getting way too fucking long. So, hurrying up, let's do a quick review of how Large Language Models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
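For what it's worth, the "bunch of numbers that we add and multiply together" part is not a metaphor. Here is a drastically shrunken toy sketch of that kind of computation (real models have billions of tuned numbers, many stacked layers and an attention mechanism, none of which is shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 50    # a real model has tens of thousands of tokens
EMBED_DIM = 8      # a real model uses thousands of dimensions per token
HIDDEN_DIM = 16

# The "giant matrices": just arrays of numbers, random at first,
# tuned during training on enormous amounts of text.
embeddings = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))
W1 = rng.normal(size=(EMBED_DIM, HIDDEN_DIM))
W2 = rng.normal(size=(HIDDEN_DIM, VOCAB_SIZE))

def next_token_probabilities(context_token_ids):
    """One forward pass: multiply and add, squash, multiply and add again,
    then turn the scores into a probability for every token in the vocabulary."""
    x = embeddings[context_token_ids].mean(axis=0)   # crude summary of the context
    h = np.tanh(x @ W1)                              # matrix multiply plus a nonlinearity
    logits = h @ W2                                  # one raw score per vocabulary token
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                           # softmax: scores -> probabilities

probs = next_token_probabilities([3, 17, 42])        # some arbitrary "context" tokens
print(probs.argmax(), round(float(probs.max()), 3))  # the untrained model's current favorite next token
```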
(takes a big breath)this “thing” has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don't actually know what internal models it creates, we don't know what patterns it extracted or internalized from the data that we fed it, we don't know what internal rules decide its behavior, we don't know what is going on inside there; current LLMs are a black box. We don't know what it learned, we don't know what its fundamental values are, we don't know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving doesn't it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software where a programmer will sit down and build the thing line by line, with all its behaviors specified. It is more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don't know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that to try and go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and they have been making some moderate progress lately. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Puff! Ok so, now that this is all out of the way I can go onto the last subject before I move on to part two of this video, the character of the hour, the man the myth the legend. The modern day Casandra. Mr chicken little himself! Sci fi author extraordinaire! The mad man! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979, wait, what the fuck, September eleven? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that's terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. a very eccentric man, he is an AI doomer. Convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it, to properly quote,( grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly  molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now; not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI we will have created an agent who doesn't care about humans but will care about something else entirely irrelevant to us, and it will seek to maximize that goal, and because it will be vastly more intelligent than humans we won't be able to stop it. In fact not only will we not be able to stop it, there won't be a fight at all. It will carry out its plans for world domination in secret without us even detecting it and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important, it all hinges on that, intelligence as the measure of how capable you are to come up with solutions to problems, problems such as "how to kill all humans without being detected or stopped". And you may say, well now, intelligence is fine and all but there are limits to what you can accomplish with raw intelligence, even if you are supposedly smarter than a human surely you wouldn't be capable of just taking over the world unimpeded, intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn't *just* intelligence that did all that but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all of that: to come up with the plan, to convince people to follow it and to delegate the tasks to the appropriate subagents, it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn't stop there. Like I said during his intro, he believes there will be "no fire alarm". In fact for all we know, maybe AGI has already been created and it's merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn't think this is happening right now, but with the next iteration of GPT? GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish with multilateral international treaties any group or nation that doesn't stop, going as far as authorizing military strikes on GPU farms as sanctions under those treaties.
What's more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it, we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called "death with dignity" where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn't mean to stop fighting or to stop trying to find ways to align AGI, impossible as it may seem, but to merely have the basic dignity of acknowledging that we are probably not going to win. In every interview I've seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think it's hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices giving it instant access to humanity. And, worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, wildly available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist; we are going to receive it with a red carpet and hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are at that specific level of alarm. Opinions vary across the field, and from what I understand this level of hopelessness and defeatism is a minority opinion.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous; maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and I would not consider it something to be dismissed as an idea that experts don't take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would be an agent smarter than a human, capable of causing vast harm to humanity and with no human qualms or limitations to stop it. I believe this is not just possible but probable, and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said. I do have one key disagreement with yudkowsky. And partially the reason why I made this video was so that I could present this counterargument and maybe he, or someone that thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t, that would be really depressing.)
Finally, we can move on to part 2
PART TWO - MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don't I? As I said, I am no expert, and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has given over the past year, I have seen most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy: I have read HPMOR, Three Worlds Collide, The Dark Lord's Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash; that last one didn't work out so well for me.) My point is that in all the material I have seen of Eliezer, I don't recall anyone ever giving him quite the specific argument I'm about to give.
It's a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I do believe alignment is really hard. My key disagreement is specifically with the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, with humanity's supposed lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, with potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can't do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. "Aintibodies."
In the past, humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to pause and reconsider using it as a weapon; we became so scared that we overregulated the technology to the point of it almost becoming economically unviable to apply; we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenals.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don't get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won't be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other Yudkowsky detractors and say that he claims AGI will basically be a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, this dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet Earth. It would be humanity's greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, creating a powerful superintelligent AGI without flaws, without bugs, without glitches would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that's easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans in which it is not going to be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, colliding with outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I'm not saying that an AGI capable of doing this won't be possible maybe someday; I'm saying that to create an AGI capable of doing this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I'm saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the precise set of layers and weights and biases that give rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I'm saying that AGI, when it fails, when humans screw it up, doesn't suddenly become more powerful than we ever expected; it's more likely that it just fails and collapses. To turn one of Eliezer's examples against him: when you screw up a rocket, it doesn't accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don't get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to build, and that if you fail at building an unaligned AGI, you don't get an unaligned AGI, you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I'd say! It means that there is SOME safety margin, some space to screw up, before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up, we will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won't be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I'm not stupid and I can try to anticipate what Yudkowsky might argue back and answer it before he says it. (Although I believe the guy is probably smarter than me, and if I follow his logic, I probably can't actually anticipate what he would argue to prove me wrong, much like I can't predict what moves Magnus Carlsen would make in a game of chess against me; I SHOULD predict that him proving me wrong is the likeliest outcome, even if I can't picture how he will do it. But you see, I believe in a little thing called debating with dignity, wink.)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter, so it would lie and pretend to be an aligned AGI in order to trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don't create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to say to that. First, this is filled with a lot of assumptions whose likelihood I don't know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI that is better than itself. My priors on all of these are dubious at best. Second, it feels like kicking the can down the road. I don't think an AGI capable of all of this is trivial to make on a first attempt. I think it's more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won't be smart enough to pull it off effortlessly and flawlessly, because we humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn't argue that; maybe he would come up with some better, more insightful response I can't anticipate. If so, I'm waiting eagerly (although not TOO eagerly) for it.
PART THREE - CONCLUSION
So.
After all that, what is there left to say? Well, if everything that I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point as well as the basic arguments supporting the concept of AI risk, why it's something to be taken seriously and not just the ravings of highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles' AI risk series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky's argument, you can search for Paul Christiano or Robin Hanson, both very smart people who had very smart debates on the subject with Eliezer.
The second purpose here was to provide an argument against Yudkowsky's brand of doomerism, both so that it can be accepted if proven right and so that it can be properly refuted if proven wrong. Again, I really hope it's not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn't make it any worse. If the sky is blue, I want to believe that the sky is blue, and if the sky is not blue, then I don't want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.
machine-saint · 5 months
yes, chatgpt and midjourney really are AI
it's funny how you see people going "ugh chatgpt isn't Real AI, everyone knows AI is commander data from star trek", and then you see this article about facade that refers to the characters' "AI system", or this paper the developers wrote about the language they used to code the characters, published in the working notes of Artificial Intelligence and Interactive Entertainment, because "artificial intelligence" in the actual field has always had a very broad definition. and of course in the games industry in general, any way of controlling an agent is called "AI"; even "run directly at the player, ignoring any obstacles in the way" counts as "enemy AI"! when I took a course on AI in college in 20[mumble] from someone influential in the field, studying support vector machines and neural networks and alpha-beta pruning, well before the modern AI hype, there was no "well of course we don't have Real AI"; everything we did in that class fell well within what we considered AI to be.
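(as a quick illustrative sketch, with made-up names and numbers rather than anything from an actual game, this is roughly all it takes to qualify as "enemy AI" in that games-industry sense:)

```python
# deliberately trivial "enemy AI": run directly at the player, ignoring obstacles.
# all names and values here are invented for illustration.
from dataclasses import dataclass
import math

@dataclass
class Entity:
    x: float
    y: float

def chase_player(enemy: Entity, player: Entity, speed: float, dt: float) -> None:
    """Move the enemy straight toward the player by one frame's worth of distance."""
    dx, dy = player.x - enemy.x, player.y - enemy.y
    dist = math.hypot(dx, dy)
    if dist > 1e-6:                        # don't divide by zero when overlapping
        enemy.x += speed * dt * dx / dist  # step along the normalized direction
        enemy.y += speed * dt * dy / dist

# called once per frame from the game loop
enemy, player = Entity(0.0, 0.0), Entity(10.0, 5.0)
chase_player(enemy, player, speed=3.0, dt=1 / 60)
```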
the idea that AI can only refer to human-equivalent behavior doesn't serve any useful purpose and is completely out of line with the history of the field itself, and ted chiang's proposal to call it "applied statistics" is not only pointless but feeds into the modern hype that confuses ML (which refers to a specific subset of AI that has proven to be very, very effective recently) with AI as a whole: rule-based systems such as most video game AI have zero statistical grounding, and calling them "applied statistics" would be even more misleading!
halobirthdays · 6 months
Happy birthday to Captain Veronica Dare!
Today is her -492nd birthday!
Dare was born on Actium and enlisted with the UNSC, eventually joining the Office of Naval Intelligence. During the Battle of Mombasa, Dare was given command of a squad of ODSTs for a classified mission. The squad--Alpha-Nine--was led by Edward Buck, whom she'd crossed paths with before.
Dare met Buck while on shore leave seven years prior, and the two shared a brief affair. Buck had hopes that their relationship could develop into something more serious, but those hopes were dashed when he learned that she was affiliated with ONI, leading to concerns about his ability to trust her due to ONI's reputation as underhanded and manipulative. Thus, they parted ways, and they did not see one another again until a mission on Belisk six years later, and then at the Battle for Earth a year after that.
Dare did not reveal her mission to Alpha-Nine before she took control of the squad: they believed they were supposed to infiltrate and destroy the Covenant carrier Solemn Penance. During the drop, she ordered Alpha-Nine to divert their descent, avoiding the ship and landing in New Mombasa. Alpha-Nine was scattered after the crash, with Dare continuing her mission to the Superintendent's data center. Eventually, the squad was able to regroup, and Dare revealed her true mission: she was supposed to gather information on the portal at Voi, and discovered a Huragok named Quick to Adjust in the process. These organic artificial intelligences contained a wealth of knowledge about the Forerunners and the Covenant, which Dare quickly realized would be invaluable to the UNSC. Dare and Alpha-Nine were eventually able to escape the city with Quick to Adjust in tow.
After the war's end, Dare and Buck decided to spend shore leave together again, as they did whenever their schedules allowed, only to be interrupted by Jun-A266, a SPARTAN-III who was now in charge of Spartan recruitment. Jun asked Buck to join the Spartans, and he initially declined. But after a mission to Draco III against the United Rebel Front led to the death of Jonathan Doherty, his squadmate, Alpha-Nine disbanded and Buck joined the SPARTAN program alongside Mickey and Romeo. Later, Buck would discover that Mickey had defected to the URF, and Mickey was arrested and held at a secret SPARTAN training facility.
During the Created conflict, Dare asked Buck to re-establish Alpha-Nine to assist her in convincing a URF-controlled colony to share technology that could be used against the Guardians. To do this, she wanted Buck to enlist Mickey's help. Though he was reluctant, Buck broke Mickey out of confinement, and Dare and Alpha-Nine traveled to Hole in the Wall, where she was able to negotiate sharing the technology. Shortly thereafter, the colony was attacked by a Guardian. As they evacuated, Buck confessed to Dare that he regretted never marrying her. The squad was taken aboard Infinity, where Dare called Buck's bluff and dared him to follow through on his desire to marry her. He agreed, and they were married moments later by Roland, the ship's AI.
After Operation: WOLFE, Fred gave Dare a message from Veta Lopis and the Ferrets, explaining that they were undercover and infiltrating the Keepers of the One Freedom. Lopis' message warned the UNSC of the return of Atriox and his plans for the Ark. It is unknown if Dare was onboard the Infinity when it was attacked by the Banished.
In canon (~2560), she is turning 45!
blubberquark · 6 months
AI: A Misnomer
As you know, Game AI is a misnomer, a misleading name. Games usually don't need to be intelligent, they just need to be fun. There is NPC behaviour (be they friendly, neutral, or antagonistic), computer opponent strategy for multi-player games ranging from chess to Tekken or StarCraft, and unit pathfinding. Some games use novel and interesting algorithms for computer opponents (Frozen Synapse uses some sort of evolutionary algorithm) or for unit pathfinding (Planetary Annihilation uses flow fields for mass unit pathfinding), but most of the time it's variants or mixtures of simple hard-coded behaviours, minimax with alpha-beta pruning, state machines, HTN, GOAP, and A*.
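For the curious, here is a minimal, generic sketch of the minimax-with-alpha-beta item on that list; the `state` interface (is_terminal, score, moves, apply) is a made-up stand-in for whatever board representation a real engine would actually use:

```python
# generic minimax with alpha-beta pruning; the state interface is hypothetical.
def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if depth == 0 or state.is_terminal():
        return state.score()               # heuristic evaluation of the position
    if maximizing:
        best = float("-inf")
        for move in state.moves():
            best = max(best, alphabeta(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:              # prune: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in state.moves():
            best = min(best, alphabeta(state.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

Nothing statistical is happening in there, which is part of the point of the paragraphs below.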
Increasingly, AI outside of games has become a misleading term, too. It used to be that people called more things AI, then machine learning was called machine learning, robotics was called robotics, expert systems were called expert systems, then later ontologies and knowledge engineering were called the semantic web, and so on, with the remaining approaches and the original old-fashioned AI still being called AI.
AI used to be cool, then it was uncool, and the useful bits of AI were used for recommendation systems, spam filters, speech recognition, search engines, and translation. Calling it "AI" was hand-waving, a way to obscure what your system does and how it works.
With the advent of ChatGPT, we have arrived in the worst of both worlds. Calling things "AI" is cool again, but now some people use "AI" to refer specifically to large language models or text-to-image generators based on language models. Some people still use "AI" to mean autonomous robots. Some people use "AI" to mean simple artificial neural networks, Bayesian filtering, and recommendation systems. Infuriatingly, the word "algorithm" has increasingly entered the vernacular to refer to Bayesian filters and recommendation systems, in situations where a computer science textbook would still use "heuristic". Computer science textbooks still use "AI" to mean things like chess playing, maze solving, and fuzzy logic.
Let's look at a recent example! Scott Alexander wrote a blog post (https://www.astralcodexten.com/p/god-help-us-lets-try-to-understand) about current research (https://transformer-circuits.pub/2023/monosemantic-features/index.html) on distributed representations and sparsity, and the topology of the representations learned by a neural network. Scott Alexander is a psychiatrist with no formal training in machine learning or even programming. He uses the term "AI" to refer to neural networks throughout the blog post. He doesn't say "distributed representations" or "sparse representations". The original publication did use technical terms like "sparse representation"; these should be familiar to people who followed the debates about local representations versus distributed representations back in the 80s (or people like me who read those papers in university). But in that blog post, it's not called a neural network, it's called an "AI". Now, there could be a few reasons for this: either Scott Alexander doesn't know any better, or, more charitably, he does but doesn't know how to use the more precise terminology correctly, or he intentionally wants to dumb down the research for people who intuitively understand what a latent feature space is, but have never heard about "machine learning" or "artificial neural networks".
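For anyone who hasn't met the terminology, here is a toy sketch (invented numbers, not taken from the paper) of the difference between a dense, distributed representation and a sparse one:

```python
# toy illustration: a "representation" here is just a vector of feature activations.
# in a sparse representation most entries are (near) zero, so the few active
# features can be read off individually and, hopefully, given an interpretation.
dense_representation  = [0.37, -0.12, 0.88, 0.05, -0.64, 0.21, 0.44, -0.09]
sparse_representation = [0.0, 0.0, 1.9, 0.0, 0.0, 0.0, 0.0, 0.7]

def active_features(vec, threshold=0.1):
    """Indices of features whose activation magnitude exceeds the threshold."""
    return [i for i, v in enumerate(vec) if abs(v) > threshold]

print(active_features(dense_representation))   # most features fire at once: hard to interpret
print(active_features(sparse_representation))  # only features 2 and 7 fire: easy to label
```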
Another example can come in the form of a thought experiment: You write an app that helps people tidy up their rooms, and find things in that room after putting them away, mostly because you needed that app for yourself. You show the app to a non-technical friend, because you want to know how intuitive it is to use. You ask him if he thinks the app is useful, and if he thinks people would pay money for this thing on the app store, but before he answers, he asks a question of his own: Does your app have any AI in it?
What does he mean?
Is "AI" just the new "blockchain"?
nightwing-scp · 5 months
hello everyone
welcome to @n1ghtw1ng-scp's RP blog :]
featuring my main SCP OC Night. some extras include her ex-parasite Entity and coworker Lewis. also the Graveyard AICs from SCP-7374 for some reason.
general information about them :
Name: Nightwing Sky, alternately Night.aic
Age: 20 at time of death
Pronouns: she/her
Sexuality: ace and demi
Species: AI construct (mind), cybernetic human (body)
Abilities: She can use her core energy and manipulate it, up to a point. Also has retractable metal blade wings (think Murder Drones) and claws that can be equipped with a shock module.
About: Night is an AIC (artificially intelligent conscript) that works with the SCP Foundation. She is part of Mobile Task Forces Alpha-9 (with other anomalies) and Kappa-10 (with other AICs). She is also part of the dimensional research program, often traveling to different realities to gather and send back information. She was formerly a human/dragon hybrid, created by Red Nexus and employed by the Foundation, until she 'canceled out' 3125 and died, then her consciousness was brought back as an AI system and put back into her original body, which was heavily modified.
Playlist:
everyone else: ⬇️
-------------------------------------
Names: the Graveyard AICs - Janus, Caerus, and Hermes.aic
Age: on average, 2-3 months
Pronouns: they/it
Sexuality: sexuality? they don’t even have identity down yet
Species: collective AI system
Abilities: Individual: Janus can manifest minor memetic hazards (decommissioned for not 'communicating effectively'), Caerus can access the Foundation intranet at will and is able to find a way into most networks (decommissioned for not 'showing signs of sapience'), and Hermes can perform minor localized reality manipulation (decommissioned for putting a researcher into a coma).
About: They're originally from canon SCP-7374; they only show up in that one article and then they get destroyed, so this is sort of an AU in which Dr. Parker is less violent and they're still there. They mostly speak together and often identify as one (because it's easier that way), even though they have different personalities and skills. They're currently incorporeal, communicating by transmissions through devices (and their abilities). .aic stands for Artificial Intelligence Construct, used to designate a specific AI as created by and working for the Foundation. All 3 of them escaped the "AIC Graveyard", a server where the Foundation sends AICs that they deem ineffective or dangerous.
Playlist:
-------------------------------------
Name: Entity
Age: 25, 16 at time of transformation
Pronouns: it/he
Sexuality: aroace
Species: mist parasite (idk)
Abilities: Can possess biological minds, including most animals and humans, partially or completely occupying their brains.
About: Entity is a parasite/symbiont whose original form is a vaguely humanoid red mist figure. Originally only known as Phoenix, he was a test subject for Red Nexus along with Night. He got selected for 'intensive testing', and they put him through a series of processes that temporarily stripped away most of his base personality, and eventually he took the form of a mist entity that could only survive through extended contact with a host consciousness. It ended up finding Night again and taking her as its main host, until her biological consciousness was erased and it was forced to take the mind of her pet snake, Fang.
Playlist:
-------------------------------------
Name: Researcher Ashton Lewis (formerly D-9355 / SCP-939-102)
Age: 37
Pronouns: he/him
Sexuality: bisexual
Species: human
Abilities: Currently no notable abilities. Is very bad at dying, though.
About: Lewis is a researcher for the SCP Foundation, specializing in interviewing and talking with different anomalies. He was formerly a successful experimental psychologist, until he got charged with assault. He accepted the offer of becoming a D-Class for the Foundation instead of taking jail time, but while he was a D-Class he started metamorphosing into an SCP-939 instance. They eventually turned him back using SCP-914. He recontained 682 during a breach by himself, which convinced the Foundation to let him temporarily become staff so they could utilize his skills in talking to anomalies.
Playlist: