#000 cars
en-wheelz-me · 3 months
Text
Tumblr media
10 notes · View notes
outism-had-a-purpose · 8 months
Text
Tumblr media
I’m gonna walk into the woods and never fucking return
74 notes · View notes
erikisser · 6 months
Text
I LA LA LIKE ME WHEN I ROLL 🏍 LA LA LIKE ME WHEN I'M SAVAGE 😈 LA LA LIKE ME WHEN I GO 🚶🏻‍♀️ LA LA LIKE ME WHEN I'M BADDEST 💅🏻
15 notes · View notes
criticalfai1ure · 9 months
Text
Tumblr media Tumblr media Tumblr media
11 notes · View notes
jcmarchi · 5 months
Text
Nuclear Power Renaissance with Molten Salts - Technology Org
New Post has been published on https://thedigitalinsider.com/nuclear-power-renaissance-with-molten-salts-technology-org/
A science team is reinventing nuclear energy systems via molten salt technologies.
A retro wonder gleaming white in the sun, propelled by six rear-facing propellers and four jet engines affixed to the longest wings ever produced for a combat aircraft, the Convair B-36 Peacemaker looks like it flew right out of a 1950s science fiction magazine.
Frozen uranium-containing fuel salt (NaF-BeF2-UF4) inside a glovebox in Raluca Scarlat’s SALT lab. Illustration by Sasha Kennedy/UC Berkeley
One of these bombers, which flew over the American Southwest from 1955 to 1957, was unique. It bore the fan-like symbol for ionizing radiation on its tail. The NB-36H prototype was designed to be powered by a molten salt nuclear reactor — a lightweight alternative to a water-cooled reactor.
Nuclear-propelled aircraft like the NB-36H were intended to fly for weeks or months without stopping, landing only when the crew ran short of food and supplies. So what happened? Why weren’t the skies filled with these fantastical aircraft?
“The problem was that nuclear-powered airplanes are absolutely crazy,” says Per F. Peterson, the William S. Floyd and Jean McCallum Floyd Chair in Nuclear Engineering. “The program was canceled, but the large thermal power to low-weight ratio in molten salt reactors is the reason that they remain interesting today.”
Because of numerous concerns, including possible radioactive contamination in the event of a crash, the idea of nuclear-powered aircraft never took off. But nuclear submarines, using water as coolant, completely replaced their combustion-powered predecessors. Civilian reactors were built on the success of submarine systems, and as a result, most nuclear reactors today are cooled with water.
Professor Per Peterson holds a single fuel pebble, which can produce enough electricity to power a Tesla Model 3 for 44,000 miles. Illustration by Adam Lau / Berkeley Engineering
While most water-cooled reactors can safely and reliably generate carbon-free electricity for decades, they do present numerous challenges in terms of upfront cost and efficiency.
Molten salt reactors, like those first designed for nuclear-powered aircraft, address many of the inherent challenges of water-cooled reactors. Because they operate at much higher temperatures, such reactors could potentially extract far more electricity from the same amount of heat than water-cooled reactors can, hastening efforts to phase out fossil fuels.
Now, at the Department of Nuclear Engineering, multiple researchers, including Peterson, are working to revisit and reinvent molten salt technologies, paving the way for advanced nuclear energy systems that are safer, more efficient and cost-effective — and may be a key for realizing a carbon-free future.
Smaller, safer reactors
In the basement of Etcheverry Hall, there’s a two-inch-thick steel door that looks like it might belong on a bank vault. These days, the door is mostly left open, but for two decades it was the portal between the university and the Berkeley Research Reactor, which was used mainly for training. The reactor first achieved steady-state nuclear fission in 1966.
Fission occurs when the nucleus of an atom absorbs a neutron and breaks apart, transforming itself into lighter elements. Radioactive elements like uranium naturally release neutrons, and a nuclear reactor harnesses that process.
Concentrated radioactive elements interact with neutrons, splitting apart, shooting out more neutrons and splitting more atoms. This self-sustaining chain reaction releases immense amounts of energy in the form of radiation and heat. The heat is transferred to water, producing steam that drives turbines to generate electricity.
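To put “immense amounts of energy” in rough numbers, here is a back-of-envelope sketch in Python. It assumes the commonly cited figure of roughly 200 MeV released per uranium-235 fission, which does not appear in the article, so treat the result as an order-of-magnitude illustration only.

```python
# Back-of-envelope: energy released by fissioning one kilogram of U-235.
# Assumes ~200 MeV per fission, a commonly cited figure not taken from this article.

AVOGADRO = 6.022e23            # atoms per mole
MEV_TO_JOULES = 1.602e-13      # joules per MeV
ENERGY_PER_FISSION_MEV = 200   # approximate energy released per U-235 fission

atoms_per_kg = AVOGADRO * (1000 / 235)   # grams per kilogram / grams per mole
joules_per_kg = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_JOULES

print(f"~{joules_per_kg:.1e} J per kg of U-235 fissioned")  # roughly 8e13 J
# For comparison, burning a kilogram of coal releases on the order of 3e7 J,
# which is the kind of gap behind the coal-versus-tennis-ball comparison later in the article.
```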
The reactor in Etcheverry Hall is long gone, but the gymnasium-sized room now houses experiments designed to test cooling and control systems for molten salt reactors. Peterson demonstrated one of these experiments in August. The Compact Integral Effects Test (CIET) is a 30-foot-tall steel tower packed with twisting pipes.
The apparatus uses heat transfer oil to model the circulation of molten salt coolant between a reactor core and its heat exchange system. CIET is contributing extensively to the development of passive safety systems for nuclear reactors.
After a fission reaction is shut down, such systems allow for the removal of residual heat caused by radioactive decay of fission products without any electrical power — one of the main safety features of molten salt reactors.
The first molten salt reactor tested at Oak Ridge National Laboratory in the 1950s was small enough to fit in an airplane, and the new designs being developed today are not much larger.
Conventional water-cooled reactors are comparatively immense — the energy-generating portion of the Diablo Canyon Power Plant in San Luis Obispo County occupies approximately 12 acres, and containment of feedwater is not the only reason why.
The core temperature in this type of reactor is usually kept at some 300 degrees Celsius, which requires 140 atmospheres of pressure to keep the water liquid. This need to pressurize the coolant means that the reactor must be built with robust, thick-walled materials, increasing both size and cost. Molten salts don’t require pressurization because they boil at much higher temperatures.
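A minimal sketch of that tradeoff, using the article’s own rough figures for water (about 300 degrees Celsius at roughly 140 atmospheres) and the FLiBe liquid range given later in the piece (about 460 to 1460 degrees Celsius at ordinary pressure). The salt-cooled core temperature shown is an assumption added for illustration; the article only says such reactors run hotter than water-cooled ones.

```python
# Rough coolant comparison built from the figures quoted in the article.
# The ~600 C salt core temperature is an illustrative assumption, not from the article.
coolants = {
    "pressurized water": {"core_temp_c": 300, "pressure_atm": 140,
                          "liquid_range": "liquid up to ~300 C only when held at ~140 atm"},
    "FLiBe molten salt": {"core_temp_c": 600, "pressure_atm": 1,
                          "liquid_range": "liquid from ~460 C to ~1460 C at ~1 atm"},
}

for name, props in coolants.items():
    print(f"{name:18s} core ~{props['core_temp_c']} C at ~{props['pressure_atm']} atm "
          f"({props['liquid_range']})")
# The thick-walled vessel needed to hold ~140 atm is a large part of the size and
# cost difference between water-cooled and salt-cooled designs described above.
```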
In conventional reactors, water coolant can boil away in an accident, potentially causing the nuclear fuel to melt down and damage the reactor. Because the boiling points of molten salts are far higher than the reactor’s operating temperature, meltdowns are extremely unlikely.
Even in the event of an accident, the molten salt would continue to remove heat without any need for electrical power to cycle the coolant — a requirement in conventional reactors.
“Molten salts, because they can’t boil away, are intrinsically appealing, which is why they’re emerging as one of the most important technologies in the field of nuclear energy,” says Peterson.
The big prize: efficiency
Assistant professor Raluca Scarlat uses a glovebox in her Etcheverry Hall lab. Illustration by Adam Lau / Berkeley Engineering
To fully grasp the potential benefits of molten salts, one has to visit the labs of the SALT Research Group. Raluca O. Scarlat, assistant professor of nuclear engineering, is the principal investigator for the group’s many molten salt studies.
Scarlat’s lab is lined with transparent gloveboxes filled with argon gas. Inside these gloveboxes, Scarlat works with many types of molten salts, including FLiBe, a mixture of lithium fluoride and beryllium fluoride. Her team aims to understand exactly how this variety of salt might be altered by exposure to a nuclear reactor core.
On the same day that Peterson demonstrated the CIET test, researchers in the SALT lab were investigating how much tritium (a byproduct of fission) beryllium fluoride could absorb.
Salts are ionic compounds, meaning that they contain elements that have lost electrons and other elements that have gained electrons, resulting in a substance that carries no net electric charge. Ionic compounds are very complex and very stable. They can absorb a large range of radioactive elements.
This changes considerations around nuclear waste, especially if the radioactive fuel is dissolved into the molten salt. Waste products could be electrochemically separated from the molten salts, reducing waste volumes and conditioning the waste for geologic disposal.
Waste might not even be the proper term for some of these byproducts, as many are useful for other applications — like tritium, which is a fuel for fusion reactors.
Salts can also absorb a lot of heat. FLiBe remains liquid between approximately 460 degrees and 1460 degrees Celsius. The higher operating temperature of molten salt coolant means more steam generation and more electricity, greatly increasing the efficiency of the reactor, and for Scarlat, efficiency is the big prize.
“If we filled the Campanile with coal and burned it to create electricity, a corresponding volume of uranium fuel would be the size of a tennis ball,” says Scarlat. “Having hope that we can decarbonize and decrease some of the geopolitical issues that come from fossil fuel exploration is very exciting.”
“Finding good compromises”
Thermal efficiency refers to the amount of useful energy produced by a system as compared with the heat put into it. A combustion engine achieves about 20% thermal efficiency. A conventional water-cooled nuclear reactor generally achieves about 32%.
According to Massimiliano Fratoni, Xenel Distinguished Associate Professor in the Department of Nuclear Engineering, a high-temperature, molten salt reactor might achieve 45% thermal efficiency.
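To put those percentages in concrete terms, here is a quick sketch using the efficiencies quoted above; the 3,000 megawatts of thermal input is an arbitrary round number chosen only for illustration.

```python
# Electrical output for a fixed thermal input at the efficiencies quoted in the article.
THERMAL_INPUT_MW = 3000  # hypothetical thermal power, chosen only for illustration

efficiencies = {
    "combustion engine":            0.20,
    "conventional water-cooled":    0.32,
    "high-temperature molten salt": 0.45,
}

for system, eta in efficiencies.items():
    print(f"{system:30s} {eta:.0%} -> {THERMAL_INPUT_MW * eta:,.0f} MW of electricity")
# Same heat, roughly 40% more electricity at 45% versus 32% efficiency --
# which is what Scarlat calls the big prize.
```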
So, with all the potential benefits of molten salt reactors, why weren’t they widely adopted years ago? According to Peter Hosemann, Professor and Ernest S. Kuh Chair in Engineering, there’s a significant challenge inherent in molten salt reactors: identifying materials that can withstand contact with the salt.
Anyone who’s driven regularly in a region with icy roads has probably seen trucks and cars with ragged holes eaten in the metal around the wheel wells. Salt spread on roads to melt ice is highly corrosive to metal. A small amount of moisture in the salt coolant of a nuclear reactor could cause similar corrosion, and when combined with extreme heat and high radiation, getting the salt’s chemistry right is even more critical.
Hosemann, a materials scientist, uses electron microscopes to magnify metal samples by about a million times. The samples have been corroded and/or irradiated, and Hosemann studies how such damage alters their structures and properties. These experiments may help reactor designers estimate how much corrosion to expect each year in a molten salt reactor housing.
Hosemann says molten salt reactors present special engineering challenges because the salt coolant freezes well above room temperature, meaning that repairs must either be done at high temperatures or the coolant must first be drained.
Commercially successful molten salt reactors, then, will have to be very reliable, and that won’t be simple. For example, molten salt reactors with liquid fuel may be appealing in terms of waste management, but the dissolved fuel also adds impurities to the salt that make it more corrosive.
Liquid fuel designs will need to be more robust to counter corrosion, resulting in higher costs, and the radioactive coolant presents further maintenance challenges.
Nuclear engineering graduate students Sasha Kennedy and Nathanael Gardner, from left, work with molten salt. Illustration by Adam Lau/Berkeley Engineering
“Good engineering is always a process of finding good compromises. Even the molten salt reactor, as beautiful as it is, has to make compromises,” says Hosemann.
Peterson thinks the compromise is in making molten salt reactors modular. He was the principal investigator on the Department of Energy-funded Integrated Research Project that conducted molten salt reactor experiments from 2012 to 2018.
His research was spun off into Kairos Power, which he co-founded with Berkeley Engineering alums Edward Blandford (Ph.D.’10 NE) and Mike Laufer (Ph.D.’13 NE), and where Peterson serves as Chief Nuclear Officer.
The U.S. Nuclear Regulatory Commission just completed a review of Kairos Power’s application for a demonstration reactor, Hermes, as a proof of concept. Peterson says that high-temperature parts of Kairos Power’s reactors would likely last for 15 to 25 years before they’d need to be replaced, and because the replacement parts will be lighter than those of conventional reactors, they’ll consume fewer resources.
“As soon as you’re forced to make these high-temperature components replaceable, you’re systematically able to improve them. You’re building improvements, replacing the old parts and testing the new ones, iteratively getting better and better,” says Peterson.
Lowering energy costs
California is committed to reaching net zero carbon emissions by 2045. It’s tempting to assume that this goal can be reached with renewables alone, but electricity demand doesn’t follow the peak generating times of renewables.
Natural gas power surges in the evenings as renewable energy wanes. Even optimistic studies on swift renewable energy adoption in California still assume that some 10% of energy requirements won’t be met by renewables and storage alone.
Considering the increasing risks to infrastructure in California from wildfires and intensifying storms, it’s likely that non-renewable energy sources will still be required to meet the state’s needs.
Engineers in the Department of Nuclear Engineering expect that nuclear reactors will make more sense than natural gas for future non-renewable energy needs because they produce carbon-free energy at a lower cost. In 2022, the price of natural gas in the United States fluctuated from around $2 to $9 per million BTUs.
Peterson notes that energy from nuclear fuel currently costs about 50 cents per million BTUs. If new reactors can be designed with high intrinsic safety and lower construction and operating costs, nuclear energy might be even more affordable.
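A back-of-envelope comparison using the fuel prices quoted in the two paragraphs above (roughly $2 to $9 per million BTU for natural gas in 2022, and about 50 cents per million BTU for nuclear fuel). This looks at fuel cost only; construction and operating costs, which the article flags as the real hurdle for nuclear, are left out.

```python
# Fuel-only cost comparison per million BTU of heat, using the article's figures.
NUCLEAR_FUEL_PER_MMBTU = 0.50        # dollars, per Peterson
NATURAL_GAS_PER_MMBTU = (2.0, 9.0)   # 2022 price range quoted in the article, dollars

low, high = NATURAL_GAS_PER_MMBTU
print(f"Natural gas fuel costs roughly {low / NUCLEAR_FUEL_PER_MMBTU:.0f}x to "
      f"{high / NUCLEAR_FUEL_PER_MMBTU:.0f}x as much as nuclear fuel per million BTU.")
# Roughly 4x to 18x on fuel alone; the open question is whether new designs can
# bring construction and operating costs down far enough to match.
```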
Molten salt sits on a microscope stage in professor Raluca Scarlat’s lab. Illustration by Adam Lau/Berkeley Engineering
Even if molten salt reactors do not end up replacing natural gas, Hosemann says the research will still prove valuable. He points to other large-scale scientific and engineering endeavors like fusion reactors, which in 60 years of development have never been used commercially but have led to other breakthroughs.
“Do I think we’ll have fusion-generated power in our homes in the next five years? Absolutely not. But it’s still valuable because it drives development of superconductors, plasmas and our understanding of materials in extreme environments, which today get used in MRI systems and semiconductor manufacturing,” says Hosemann. “Who knows what we’ll find as we study molten salt reactors?”
Source: UC Berkeley
3 notes · View notes
Text
Prometheus: Modern Frankenstein Henry has one of THESE BAD BOYS
Tumblr media Tumblr media
10 notes · View notes
pizzatheif · 2 years
Text
anyway. lily has her first day of preschool tomorrow and i’m a mess abt it.
7 notes · View notes
strcngergirls · 1 year
Text
@theolderh3nderson​ asked: "how could you possibly think that was a good idea?" 
Tumblr media
“I dunno.” She shrugged, flicking her gaze between Elie and Steve’s mangled car. It wasn’t like Max to panic externally, but this circumstance definitely called for some anxiety. She chewed on her fingernail as she observed the damage, hoping Elie could do anything to help her. The hood was crunched up, the metal rippled and engulfing the mailbox she’d run into. Maybe he wouldn’t notice. Or maybe he’d kill her when he found out. “He said he’d teach me, but never lets me drive it. I thought I could just figure it out on my own..”
Tumblr media
5 notes · View notes
tangerinesour · 1 year
Text
fun fact, the drive between my office and my mom’s house is just long enough to get through tange’s minegishi v. white death exposition, plus netflix still plays when you lock your phone and that’s 💯👌🏻
2 notes · View notes
en-wheelz-me · 3 months
Text
Tumblr media
12 notes · View notes
foxcassius · 1 year
Text
okay so i put the cash i have left in japan in front of me and stared at it and i think i will be okay so long as sending my suitcases does not cost MORE than ¥12,000. i have to put ¥10,000 back in my bank account for fucking docomo, i need another ¥10,000 to pay for trains to the goddamn airport because transportation in japan is not anywhere near as cheap as the internet would lead you to believe, then the ¥12,000 for having the suitcases taken to my airbnb and i'll have ¥5,000 left to. eat at all until i leave the country. and when i check my second bag i will simply have to use my american debit card bc thats All i have here 👍
#again. i hate altia. i cannot believe that 9 months of work netted me 0 dollars and 0 cents in savings and i didnt even GO anywhere.#i literally have spent the last 9 months in okayama prefecture and osaka. osaka for a TOTAL of like 40 hours around flights.#yeah i went to korea twice but MY BOYFRIEND paid for those flights. you know what i paid for? japanese trains to the aiport.#which cost as much as the flights.#i hate altia. shit ass wage for real. i dont even know how the little fresh out of collegers do it.#like i have no money. i dont spend on stuff. i didnt buy my niche fashion or whatever. i LEFT my expensive niche fashion. i solf#*sold items from my expensive niche fashion. i have barely survived.#i dont know how Anyone does it i genuinely think i must be stupid i must be ass with money or something#my '''¥240 000''' paycheck was at ¥140 000 or less by the time it hit my bank account after altia was done skimming it for themselves#and then paying for gas in THEIR car to go to my job i do FOR THEM and CRAZY EXPENSIVE utilities in the apartment THEY PUT ME IN#would always have me down to like ¥80 000 in a good month to like eat and enjoy myself with?#but i also did have to send money home because japanese bank accounts are miserable and you cant use them for anything#so i'm eating off of ¥1 000 per day for breakfast and my homemade bento lunches AND dinner#and then when i was lucky i would go to okayama city and have one nice meal with my friends on the weekend#but going to the city costs fucking ¥2 000 so is it worth it?#i dont think this is a good job and genuinely i dont even think the fresh graduates should be doing it#if you want to delay your future this is the job for you. altia misleads you on their website and gives you half-truths in interviews.#dont work there.#t
4 notes · View notes
cithaerons · 2 years
Text
i feel a lot better now….. honestly it felt ok, not GREAT but like i was doing something i knew how to do, not like starting back from square 0.
3 notes · View notes
erikisser · 2 months
Text
couches are really expensive you guys
4 notes · View notes
criticalfai1ure · 11 months
Text
lily: mommy, why do you like bad guys so much?
me: because…
me:
me: … they’re fun.
10 notes · View notes
jcmarchi · 18 days
Text
Jay Dawani is Co-founder & CEO of Lemurian Labs – Interview Series
New Post has been published on https://thedigitalinsider.com/jay-dawani-is-co-founder-ceo-of-lemurian-labs-interview-series/
Jay Dawani is Co-founder & CEO of Lemurian Labs. Lemurian Labs is on a mission to deliver affordable, accessible, and efficient AI computers, driven by the belief that AI should not be a luxury but a tool accessible to everyone. The founding team at Lemurian Labs combines expertise in AI, compilers, numerical algorithms, and computer architecture, united by a single purpose: to reimagine accelerated computing.
Can you walk us through your background and what got you into AI to begin with?
Absolutely. I’d been programming since I was 12, building my own games and such, but I actually got into AI when I was 15 because of a friend of my father’s who was into computers. He fed my curiosity and gave me books to read, such as Von Neumann’s ‘The Computer and the Brain’, Minsky’s ‘Perceptrons’, and Russell and Norvig’s ‘AI: A Modern Approach’. These books influenced my thinking a lot, and it felt almost obvious then that AI was going to be transformative and that I just had to be a part of this field. 
When it came time for university I really wanted to study AI, but I didn’t find any universities offering that, so I decided to major in applied mathematics instead. A little while after I got to university, I heard about AlexNet’s results on ImageNet, which was really exciting. At that time I had this now-or-never moment happen in my head, so I went full bore into reading every paper and book on neural networks I could get my hands on and sought out all the leaders in the field to learn from them, because how often do you get to be there at the birth of a new industry and learn from its pioneers? 
Very quickly I realized I don’t enjoy research, but I do enjoy solving problems and building AI-enabled products. That led me to working on autonomous cars and robots, AI for materials discovery, generative models for multi-physics simulations, AI-based simulators for training professional racecar drivers and helping with car setups, space robots, algorithmic trading, and much more. 
Now, having done all that, I’m trying to rein in the cost of AI training and deployment, because that will be the greatest hurdle we face on our path to enabling a world where every person and company can have access to and benefit from AI in the most economical way possible.
Many companies working in accelerated computing have founders that have built careers in semiconductors and infrastructure. How do you think your past experience in AI and mathematics impacts your ability to understand the market and compete effectively?
I actually think not coming from the industry gives me the benefit of an outsider’s advantage. I have found it to be the case quite often that not having knowledge of industry norms or conventional wisdom gives one the freedom to explore more freely and go deeper than most others would, because you’re unencumbered by biases. 
I have the freedom to ask ‘dumber’ questions and test assumptions in a way that most others wouldn’t because a lot of things are accepted truths. In the past two years I’ve had several conversations with folks within the industry where they are very dogmatic about something but they can’t tell me the provenance of the idea, which I find very puzzling. I like to understand why certain choices were made, and what assumptions or conditions were there at that time and if they still hold. 
Coming from an AI background, I tend to take a software view: looking at where the workloads are today and all the possible ways they may change over time, and modeling the entire ML pipeline for training and inference to understand the bottlenecks, which tells me where the opportunities to deliver value are. And because I come from a mathematical background, I like to model things to get as close to truth as I can and have that guide me. For example, we have built models that calculate system performance and total cost of ownership, so we can measure the benefit we can bring to customers with software and/or hardware and better understand our constraints and the different knobs available to us, plus dozens of other models for various things. We are very data driven, and we use the insights from these models to guide our efforts and tradeoffs. 
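As a purely illustrative sketch of what a total-cost-of-ownership model can look like, here is a toy version in Python. Every function name, parameter, and number below is hypothetical and invented for this example; it is not Lemurian Labs’ actual model, which the interview describes only in general terms.

```python
# Toy TCO sketch: dollars per delivered petaflop-hour over a system's lifetime.
# All names and numbers are hypothetical illustrations, not Lemurian Labs' model.

def cost_per_pflop_hour(capex_usd, power_kw, utilization, peak_pflops,
                        lifetime_years=4, energy_usd_per_kwh=0.10):
    """Return lifetime (capex + energy) cost divided by useful compute delivered."""
    hours = lifetime_years * 365 * 24
    energy_cost = power_kw * hours * energy_usd_per_kwh
    delivered_pflop_hours = peak_pflops * utilization * hours
    return (capex_usd + energy_cost) / delivered_pflop_hours

# Same hypothetical hardware, with software lifting utilization from 30% to 55%.
baseline = cost_per_pflop_hour(capex_usd=250_000, power_kw=10, utilization=0.30, peak_pflops=8)
improved = cost_per_pflop_hour(capex_usd=250_000, power_kw=10, utilization=0.55, peak_pflops=8)
print(f"${baseline:.2f} vs ${improved:.2f} per PFLOP-hour "
      f"({baseline / improved:.1f}x better from utilization alone)")
```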
It seems like progress in AI has primarily come from scaling, which requires exponentially more compute and energy. It seems like we’re in an arms race with every company trying to build the biggest model, and there appears to be no end in sight. Do you think there is a way out of this?
There are always ways. Scaling has proven extremely useful, and I don’t think we’ve seen the end yet. We will very soon see models being trained with a cost of at least a billion dollars. If you want to be a leader in generative AI and create bleeding edge foundation models you’ll need to be spending at least a few billion a year on compute. Now, there are natural limits to scaling, such as being able to construct a large enough dataset for a model of that size, getting access to people with the right know-how, and getting access to enough compute. 
Continued scaling of model size is inevitable, but we also can’t turn the entire earth’s surface into a planet-sized supercomputer to train and serve LLMs, for obvious reasons. To get this under control we have several knobs we can play with: better datasets, new model architectures, new training methods, better compilers, algorithmic improvements and exploitations, better computer architectures, and so on. If we do all of that, there’s roughly three orders of magnitude of improvement to be found. That’s the best way out. 
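As a quick arithmetic illustration of how “roughly three orders of magnitude” can come from stacking those knobs: the individual factors below are invented for illustration and are not estimates given in the interview.

```python
# How independent multiplicative gains compound to ~1000x.
# The individual factors are hypothetical, chosen only to illustrate the arithmetic.
from math import prod, log10

hypothetical_gains = {
    "better datasets and training methods": 3.0,
    "new model architectures":              5.0,
    "compilers and algorithmic work":       8.0,
    "better computer architectures":        8.0,
}

total = prod(hypothetical_gains.values())
print(f"Combined: ~{total:.0f}x, about {log10(total):.1f} orders of magnitude")
# 3 * 5 * 8 * 8 = 960 -- no single knob gets there, but the product can.
```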
You are a believer in first principles thinking, how does this mold your mindset for how you are running Lemurian Labs?
We definitely employ a lot of first principles thinking at Lemurian. I have always found conventional wisdom misleading, because that knowledge was formed at a certain point in time when certain assumptions held. But things always change, and you need to retest assumptions often, especially when living in such a fast-paced world. 
I often find myself asking questions like “this seems like a really good idea, but why might this not work?”, or “what needs to be true in order for this to work?”, or “what do we know to be absolute truths, and what assumptions are we making and why?”, or “why do we believe this particular approach is the best way to solve this problem?”. The goal is to invalidate and kill off ideas as quickly and cheaply as possible. We want to maximize the number of things we’re trying out at any given point in time. It’s about being obsessed with the problem that needs to be solved, and not being overly opinionated about what technology is best. Too many folks focus too heavily on the technology; they end up misunderstanding customers’ problems, missing the transitions happening in the industry that could invalidate their approach, and being unable to adapt to the new state of the world.
But first principles thinking isn’t all that useful by itself. We tend to pair it with backcasting, which basically means imagining an ideal or desired future outcome and working backwards to identify the different steps or actions needed to realize it. This ensures we converge on a meaningful solution that is not only innovative but also grounded in reality. It doesn’t make sense to spend time coming up with the perfect solution only to realize it’s not feasible to build because of real-world constraints such as resources, time, or regulation, or to build a seemingly perfect solution and find out later that you’ve made it too hard for customers to adopt.
Every now and then we find ourselves in a situation where we need to make a decision but have no data, and in this scenario we employ minimum testable hypotheses which give us a signal as to whether or not something makes sense to pursue with the least amount of energy expenditure. 
All this combined is to give us agility, rapid iteration cycles to de-risk items quickly, and has helped us adjust strategies with high confidence, and make a lot of progress on very hard problems in a very short amount of time. 
Initially, you were focused on edge AI, what caused you to refocus and pivot to cloud computing?
We started with edge AI because at that time I was very focused on trying to solve a very particular problem that I had faced in trying to usher in a world of general purpose autonomous robotics. Autonomous robotics holds the promise of being the biggest platform shift in our collective history, and it seemed like we had everything needed to build a foundation model for robotics but we were missing the ideal inference chip with the right balance of throughput, latency, energy efficiency, and programmability to run said foundation model on.
I wasn’t thinking about the datacenter at this time because there were more than enough companies focusing there, and I expected they would figure it out. We designed a really powerful architecture for this application space and were getting ready to tape it out, and then it became abundantly clear that the world had changed and the problem truly was in the datacenter. The rate at which LLMs are scaling and consuming compute far outstrips the pace of progress in computing, and when you factor in adoption it starts to paint a worrying picture. 
It felt like this is where we should be focusing our efforts, to bring down the energy cost of AI in datacenters as much as possible without imposing restrictions on where and how AI should evolve. And so, we got to work on solving this problem. 
Can you share the genesis story of Co-Founding Lemurian Labs?
The story starts in early 2018. I was working on training a foundation model for general-purpose autonomy, along with a model for generative multiphysics simulation to train the agent in and fine-tune it for different applications, and some other things to help scale into multi-agent environments. But very quickly I exhausted the amount of compute I had; I estimated needing more than 20,000 V100 GPUs. I tried to raise enough to get access to the compute, but the market wasn’t ready for that kind of scale just yet. It did, however, get me thinking about the deployment side of things, and when I sat down to calculate how much performance I would need to serve this model in the target environments, I realized there was no chip in existence that could get me there. 
A couple of years later, in 2020, I met up with Vassil – my eventual cofounder – to catch up, and I shared the challenges I had gone through in building a foundation model for autonomy. He suggested building an inference chip that could run the foundation model, and shared that he had been thinking a lot about number formats and how better representations would help not only in making neural networks retain accuracy at lower bit-widths but also in creating more powerful architectures. 
It was an intriguing idea but was way out of my wheelhouse. But it wouldn’t leave me, which drove me to spending months and months learning the intricacies of computer architecture, instruction sets, runtimes, compilers, and programming models. Eventually, building a semiconductor company started to make sense and I had formed a thesis around what the problem was and how to go about it. And, then towards the end of the year we started Lemurian. 
You’ve spoken previously about the need to tackle software first when building hardware, could you elaborate on your views of why the hardware problem is first and foremost a software problem?
What a lot of people don’t realize is that the software side of semiconductors is much harder than the hardware itself. Building a useful computer architecture that customers can use and benefit from is a full-stack problem, and if you don’t have that understanding and preparedness going in, you’ll end up with a beautiful-looking architecture that is very performant and efficient but totally unusable by developers, and usability is what actually matters. 
There are other benefits to taking a software-first approach as well, of course, such as faster time to market. This is crucial in today’s fast-moving world, where being too bullish on an architecture or feature could mean you miss the market entirely. 
Not taking a software-first view generally results in not having de-risked the things required for product adoption, not being able to respond to changes in the market (for example, when workloads evolve in an unexpected way), and having underutilized hardware. None of those are good outcomes. That’s a big reason why we care a lot about being software-centric and why our view is that you can’t be a semiconductor company without really being a software company. 
Can you discuss your immediate software stack goals?
When we were designing our architecture and thinking about the forward-looking roadmap and where the opportunities were to bring more performance and energy efficiency, it became very clear that we were going to see a lot more heterogeneity, which was going to create a lot of issues on the software side. And we don’t just need to be able to productively program heterogeneous architectures; we have to deal with them at datacenter scale, which is a challenge the likes of which we haven’t encountered before. 
This got us concerned because the last time we had to go through a major transition was when the industry moved from single-core to multi-core architectures, and at that time it took 10 years to get software working and people using it. We can’t afford to wait 10 years to figure out software for heterogeneity at scale, it has to be sorted out now. And so, we got to work on understanding the problem and what needs to exist in order for this software stack to exist. 
We are currently engaging with many of the leading semiconductor companies and hyperscalers/cloud service providers and will be releasing our software stack in the next 12 months. It is a unified programming model with a compiler and runtime capable of targeting any kind of architecture and orchestrating work across clusters composed of different kinds of hardware, and it can scale from a single node to a thousand-node cluster for the highest possible performance.
Thank you for the great interview, readers who wish to learn more should visit Lemurian Labs.
0 notes
pizzatheif · 2 years
Text
coming to the dmv with a 3yo was probably a mistake.
6 notes · View notes