HPI-MIT design research collaboration creates powerful teams
New Post has been published on https://sunalei.org/news/hpi-mit-design-research-collaboration-creates-powerful-teams/
The recent ransomware attack on ChangeHealthcare, which severed the network connecting health care providers, pharmacies, and hospitals with health insurance companies, demonstrates just how disruptive supply chain attacks can be. In this case, it hindered the ability of those providing medical services to submit insurance claims and receive payments.
This sort of attack and other forms of data theft are becoming increasingly common and often target large, multinational corporations through the small and mid-sized vendors in their corporate supply chains, enabling breaches of these enormous systems of interwoven companies.
Cybersecurity researchers at MIT and the Hasso Plattner Institute (HPI) in Potsdam, Germany, are focused on the differing organizational security cultures of large corporations and their vendors, because it is that difference that creates vulnerabilities, often stemming from a lack of emphasis on cybersecurity by senior leadership at these small to medium-sized enterprises (SMEs).
Keri Pearlson, executive director of Cybersecurity at MIT Sloan (CAMS); Jillian Kwong, a research scientist at CAMS; and Christian Doerr, a professor of cybersecurity and enterprise security at HPI, are co-principal investigators (PIs) on the research project, “Culture and the Supply Chain: Transmitting Shared Values, Attitudes and Beliefs across Cybersecurity Supply Chains.”
Their project was selected in the 2023 inaugural round of grants from the HPI-MIT Designing for Sustainability program, a multiyear partnership funded by HPI and administered by the MIT Morningside Academy for Design (MAD). The program awards about 10 grants annually of up to $200,000 each to multidisciplinary teams with divergent backgrounds in computer science, artificial intelligence, machine learning, engineering, design, architecture, the natural sciences, humanities, and business and management. The 2024 Call for Applications is open through June 3.
Designing for Sustainability grants support scientific research that promotes the United Nations’ Sustainable Development Goals (SDGs) on topics involving sustainable design, innovation, and digital technologies, with teams made up of PIs from both institutions. The PIs on these projects, who have common interests but different strengths, create more powerful teams by working together.
Transmitting shared values, attitudes, and beliefs to improve cybersecurity across supply chains
The MIT and HPI cybersecurity researchers say that most ransomware attacks aren’t reported. Smaller companies hit with ransomware attacks just shut down, because they can’t afford the payment to retrieve their data. This makes it difficult to know just how many attacks and data breaches occur. “As more data and processes move online and into the cloud, it becomes even more important to focus on securing supply chains,” Kwong says. “Investing in cybersecurity allows information to be exchanged freely while keeping data safe. Without it, any progress towards sustainability is stalled.”
One of the first large data breaches in the United States to be widely publicized provides a clear example of how an SME’s cybersecurity weaknesses can leave a multinational corporation vulnerable to attack. In 2013, hackers entered the Target Corporation’s network by obtaining the credentials of a small vendor in its supply chain: a Pennsylvania HVAC company. Through that breach, thieves were able to install malware that stole the financial and personal information of 110 million Target customers, which they sold to card shops on the black market.
To prevent such attacks, SME vendors in a large corporation’s supply chain are required to agree to follow certain security measures, but the SMEs usually don’t have the expertise or training to make good on these cybersecurity promises, leaving their own systems, and therefore any connected to them, vulnerable to attack.
“Right now, organizations are connected economically, but not aligned in terms of organizational culture, values, beliefs, and practices around cybersecurity,” explains Kwong. “Basically, the big companies are realizing the smaller ones are not able to implement all the cybersecurity requirements. We have seen some larger companies address this by reducing requirements or making the process shorter. However, this doesn’t mean companies are more secure; it just lowers the bar for the smaller suppliers to clear.”
Pearlson emphasizes the importance of board members and senior management taking responsibility for cybersecurity in order to change the culture at SMEs, rather than pushing that down to a single department, IT office, or in some cases, one IT employee.
The research team is using case studies based on interviews, field studies, focus groups, and direct observation of people in their natural work environments to learn how companies engage with vendors, and the specific ways cybersecurity is implemented, or not, in everyday operations. The goal is to create a shared culture around cybersecurity that can be adopted correctly by all vendors in a supply chain.
This approach is in line with the goals of the Charter of Trust Initiative, a partnership of large, multinational corporations formed to establish a better means of implementing cybersecurity in the supply chain network. The HPI-MIT team worked with companies from the Charter of Trust and others last year to understand the impacts of cybersecurity regulation on SME participation in supply chains and develop a conceptual framework to implement changes for stabilizing supply chains.
Cybersecurity is a prerequisite needed to achieve any of the United Nations’ SDGs, explains Kwong. Without secure supply chains, access to key resources and institutions can be abruptly cut off. This could include food, clean water and sanitation, renewable energy, financial systems, health care, education, and resilient infrastructure. Securing supply chains helps enable progress on all SDGs, and the HPI-MIT project specifically supports SMEs, which are a pillar of the U.S. and European economies.
Personalizing product designs while minimizing material waste
In a vastly different Designing for Sustainability joint research project, one that combines AI with engineering, “Personalizing Product Designs While Minimizing Material Waste” will use AI design software to lay out multiple parts of a pattern on a sheet of plywood, acrylic, or other material so that they can be laser-cut to create new products in real time without wasting material.
Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory, and Patrick Baudisch, a professor of computer science and chair of the Human Computer Interaction Lab at HPI, are co-PIs on the project. The two have worked together for years; Baudisch was Mueller’s PhD research advisor at HPI.
Baudisch’s lab developed an online design teaching system called Kyub that lets students design 3D objects in pieces that are laser cut from sheets of wood and assembled to become chairs, speaker boxes, radio-controlled aircraft, or even functional musical instruments. For instance, each leg of a chair would consist of four identical vertical pieces attached at the edges to create a hollow-centered column; four such columns provide stability to the chair, even though the material is very lightweight.
“By designing and constructing such furniture, students learn not only design, but also structural engineering,” Baudisch says. “Similarly, by designing and constructing musical instruments, they learn about structural engineering, as well as resonance, types of musical tuning, etc.”
Mueller was at HPI when Baudisch developed the Kyub software, allowing her to observe “how they were developing and making all the design decisions,” she says. “They built a really neat piece for people to quickly design these types of 3D objects.” However, using Kyub for material-efficient design is not fast; in order to fabricate a model, the software has to break the 3D model down into 2D parts and lay these out on sheets of material. This takes time and makes it difficult to see the impact of design decisions on material use in real time.
Mueller’s lab at MIT developed Fabricaide, software based on a layout algorithm that uses AI to lay out pieces on sheets of material in real time. This allows the AI to explore multiple potential layouts while the user is still editing, and thus provide ongoing feedback. “As the user develops their design, Fabricaide decides good placements of parts onto the user’s available materials, provides warnings if the user does not have enough material for a design, and makes suggestions for how the user can resolve insufficient material cases,” according to the project website.
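Fabricaide’s actual algorithms are not spelled out here, but the feedback loop described above (place the parts each time the design changes, and warn the designer when material runs short) can be sketched in a few lines. The toy Python example below is only an illustration of that loop, not the Fabricaide implementation: the part names are hypothetical, and a naive shelf-packing heuristic stands in for the real AI layout engine.

```python
# Illustrative only: a toy "place parts, warn if material runs out" loop, loosely
# inspired by the real-time feedback described above. This is not Fabricaide.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    width: float   # mm
    height: float  # mm

def shelf_pack(parts, sheet_w, sheet_h):
    """Greedily pack part bounding boxes onto one sheet, shelf by shelf.

    Returns (placements, unplaced), where placements are (part, x, y) tuples.
    """
    placements, unplaced = [], []
    x = y = shelf_h = 0.0
    for part in sorted(parts, key=lambda p: p.height, reverse=True):
        if part.width > sheet_w or part.height > sheet_h:
            unplaced.append(part)            # this part can never fit on the sheet
            continue
        if x + part.width > sheet_w:         # current shelf is full; start a new one
            x, y, shelf_h = 0.0, y + shelf_h, 0.0
        if y + part.height > sheet_h:        # out of vertical space on the sheet
            unplaced.append(part)
            continue
        placements.append((part, x, y))
        x += part.width
        shelf_h = max(shelf_h, part.height)
    return placements, unplaced

# Re-run the check each time the user edits the design (hypothetical chair parts).
design = [Part("chair_leg", 60, 400)] * 4 + [Part("seat", 350, 350), Part("back", 350, 300)]
placed, missing = shelf_pack(design, sheet_w=600, sheet_h=900)
if missing:
    print(f"Warning: not enough material for {[p.name for p in missing]}")
else:
    print(f"All {len(placed)} parts fit on the current sheet.")
```

A real system would search many candidate layouts concurrently and score them for scrap, but the check-and-warn structure of the interaction is the same idea.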
The joint MIT-HPI project integrates Mueller’s AI software with Baudisch’s Kyub software and adds machine learning to train the AI to offer better design suggestions that save material while adhering to the user’s design intent.
“The project is all about minimizing the waste on these materials sheets,” Mueller says. She already envisions the next step in this AI design process: determining how to integrate the laws of physics into the AI’s knowledge base to ensure the structural integrity and stability of objects it designs.
AI-powered startup design for the Anthropocene: Providing guidance for novel enterprises
Through her work with the teams of MITdesignX and its international programs, Svafa Grönfeldt, faculty director of MITdesignX and professor of the practice in MIT MAD, has helped scores of people in startup companies use the tools and methods of design to ensure that the solution a startup proposes actually fits the problem it seeks to solve. This is often called the problem-solution fit.
Grönfeldt and MIT postdoc Norhan Bayomi are now extending this work to incorporate AI into the process, in collaboration with MIT Professor John Fernández and graduate student Tyler Kim. The HPI team includes Professor Gerard de Melo; HPI School of Entrepreneurship Director Frank Pawlitschek; and doctoral student Michael Mansfeld.
“The startup ecosystem is characterized by uncertainty and volatility compounded by growing uncertainties in climate and planetary systems,” Grönfeldt says. “Therefore, there is an urgent need for a robust model that can objectively predict startup success and guide design for the Anthropocene.”
While startup-success forecasting is gaining popularity, it currently focuses on aiding venture capitalists in selecting companies to fund, rather than guiding the startups in the design of their products, services and business plans.
“The coupling of climate and environmental priorities with startup agendas requires deeper analytics for effective enterprise design,” Grönfeldt says. The project aims to explore whether AI-augmented decision-support systems can enhance startup-success forecasting.
“We’re trying to develop a machine learning approach that will give a forecast of the probability of success based on a number of parameters, including the type of business model proposed, how the team came together, the team members’ backgrounds and skill sets, the market and industry sector they’re working in, and the problem-solution fit,” says Bayomi, who works with Fernández in the MIT Environmental Solutions Initiative. The two are co-founders of the startup Lamarr.AI, which employs robotics and AI to help reduce the carbon dioxide impact of the built environment.
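The article does not specify the model family, so the sketch below is only a hedged illustration of how such a forecast might be framed as supervised learning over the kinds of parameters Bayomi lists: categorical descriptors of the venture (business model, sector) and numeric ones (team attributes, a problem-solution-fit score) feed a classifier that outputs a probability of success. The data, feature names, and choice of logistic regression are all assumptions made for illustration.

```python
# Illustration only: framing startup-success forecasting as supervised learning over
# hypothetical features. The data, features, and model choice are all assumed.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy training set; a real study would draw on large datasets of past ventures.
df = pd.DataFrame({
    "business_model":  ["saas", "hardware", "marketplace", "saas", "hardware", "services"],
    "industry_sector": ["climate", "health", "climate", "fintech", "climate", "health"],
    "team_size":       [2, 5, 3, 4, 6, 2],
    "prior_startups":  [0, 2, 1, 0, 3, 0],
    "problem_solution_fit": [0.4, 0.8, 0.6, 0.3, 0.9, 0.5],   # e.g., an expert-rated score
    "succeeded":       [0, 1, 1, 0, 1, 0],
})
features, target = df.drop(columns="succeeded"), df["succeeded"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["business_model", "industry_sector"])],
        remainder="passthrough")),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(features, target)

# Probability-of-success forecast for a new, hypothetical venture.
new_venture = pd.DataFrame([{
    "business_model": "saas", "industry_sector": "climate",
    "team_size": 3, "prior_startups": 1, "problem_solution_fit": 0.7,
}])
print("Estimated success probability:", round(model.predict_proba(new_venture)[0, 1], 2))
```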
The team is studying “how company founders make decisions across four key areas, starting from the opportunity recognition, how they are selecting the team members, how they are selecting the business model, identifying the most automatic strategy, all the way through the product market fit to gain an understanding of the key governing parameters in each of these areas,” explains Bayomi.
The team is “also developing a large language model that will guide the selection of the business model by using large datasets from different companies in Germany and the U.S. We train the model based on the specific industry sector, such as a technology solution or a data solution, to find what would be the most suitable business model that would increase the success probability of a company,” she says.
The project falls under several of the United Nations’ Sustainable Development Goals, including economic growth, innovation and infrastructure, sustainable cities and communities, and climate action.
Furthering the goals of the HPI-MIT Joint Research Program
These three diverse projects all advance the mission of the HPI-MIT collaboration. MIT MAD aims to use design to transform learning, catalyze innovation, and empower society by inspiring people from all disciplines to interweave design into problem-solving. HPI focuses on digital engineering, concentrating on the research and development of user-oriented innovations for all areas of life.
Interdisciplinary teams with members from both institutions are encouraged to develop and submit proposals for ambitious, sustainable projects that use design strategically to generate measurable, impactful solutions to the world’s problems.
One of MIT’s best-kept secrets lives in the Institute’s basement
New Post has been published on https://sunalei.org/news/one-of-mits-best-kept-secrets-lives-in-the-institutes-basement/
When MIT’s Walker Memorial (Building 50) was constructed in 1916, it was among the first buildings located on the Institute’s then-new Cambridge campus. At the time, national headlines would have heralded Gideon Sundback’s invention of the modern zipper, the first transcontinental phone call by Alexander Graham Bell, and Charles Fabry’s discovery of the ozone layer. It would be another 12 years before the invention of sliced bread, and, importantly, four years before the first U.S.-licensed commercial radio station would go on the air.
In true MIT fashion, the past, present, and future of Building 50 seem to coexist within its hallways. Today, the basement of Walker Memorial is home to what some students consider to be one of the Institute’s best-kept secrets — something that likely never crossed the minds of its original architects: a 24-hour, high-fidelity radio station. 
Operating under the call sign WMBR 88.1 FM (for “Walker Memorial Basement Radio”), this all-volunteer troupe has endured many hurdles similar to those faced by others in the field as radio itself has changed dramatically over the years. But as general managers James Rock and Maggie Lin will tell you, there’s something special about this station’s ability to build deeper connections within the larger community.
“Students have the opportunity to get to know a bunch of our community members,” explains Rock. “Our tech director works closely with every student who wants to contribute, which involves anything from manning a drill to climbing to the roof of Walker and manually bending the antenna back into shape, which I did a couple of weeks ago,” laughs Rock. “Most of our student members are trained by someone who’s been around and really knows what they’re doing with radio after decades of experience.”
“It’s really fun,” says Lin. “It’s being able to hang out with people who love music just as much as you do. The older members of the station are such a cool resource for talking about different kinds of music.”
Now sophomores, Rock and Lin first arrived at MIT and WMBR two years ago. At the time, the station was mitigating the effects of the Covid-19 pandemic, during which WMBR went off the air temporarily. “We’ve been general managers since last spring, so the majority of our time at the station has been managing the station,” explains Lin. “We just came at a time when the station didn’t have many student members because of Covid.”
Lin recalls stories from disc jockeys who were at the station the night in 2020 when WMBR went off the air: “I’m told it was extremely sudden. There was someone here who said they finished their show and left a tote bag of records for the next time they were going to come back, and they left … and they still haven’t [returned].” 
However, resilience is a trait that WMBR has displayed in abundance throughout its storied, nearly 80-year history. The station first signed on as WMIT on Nov. 25, 1946, with original equipment built from the ground up by MIT electrical engineering students. In 1956, when the station’s call letters were licensed to a radio station in North Carolina, the Cambridge-based station became WTBS. And when the station was in dire need of cash for new equipment in the 1970s, its members found a creative solution: an agreement with media mogul Ted Turner to exchange the call letters WTBS for $50,000. This afforded the station the new equipment it dearly needed and allowed Turner to launch the Turner Broadcasting System. The station subsequently became WMBR on Nov. 10, 1979.
So it’s no surprise how station members responded to the challenges posed by Covid. “The tech team pulled off something kind of crazy when they set that up,” says Lin. “Within weeks, they set up a system where people could upload files of shows they recorded from home, and then it would be broadcast live.”
“Sticking to the hybrid system means that especially new members have the flexibility to start out recording from home,” adds Rock. “That’s what Maggie and I did. It means if you’re scared, a little jumpy, or stutter as you speak, you can go back and edit.”
The station also expanded its slate of new content in the years following the pandemic. “I think the most lasting effect of Covid is that we are now 24/7,” says Rock. “Most of the time it’s fresh material now. The spring schedule is guaranteed fresh material from 6 a.m. to 2 a.m.”
“It’s a packed schedule,” adds Lin.
Considering the sheer amount of original programming now airing on WMBR, it would be easy to assume the station relies heavily on ad revenue to keep the lights on. But, thanks to one fundraising week held each November, the station keeps pumping out music and spoken-word shows such as “Music for Eels,” “Post-Tentious,” and “Crunchy Plastic Dinosaurs.”
“And operating an FM radio station is not cheap,” says Rock, “maintaining the antennas and buying new tech equipment, getting music, paying licensing fees, and ordering pizza to keep the students on board because the DJs have to be happy, etc. So it’s a real privilege that we are able to operate on that listener funding from that one week each year.”
“It’s kind of crazy, because when you’re broadcasting, it’s to Greater Boston, but you really don’t know how many people are listening,” adds Lin. “And I think it’s really awesome when you see fundraising week. It’s like, ‘Yeah, people really do listen.’” 
“And if a donor chooses to pledge to a show, generally the DJs will mail a postcard back as thanks for that donation. So, if you want a signature of Maggie’s or mine, support us in November!” laughs Rock. “Limiting [fundraising] to one week means that we never advertise, so as long as we keep that contained to one-52nd of the year, the rest of the time you just get the music and the DJ’s commentary you tuned in for. There’s no solicitation.”
In many ways, this highlights the paradox of WMBR: reconciling its undeniable audience of loyal listeners and passionate community members with the fact that many MIT students and employees have never heard of WMBR.
“I think a lot of people just don’t quite know that the radio station is something that exists,” explains Lin. “I understand it’s because people our age don’t really listen to radio much anymore, but I think the space is so amazing. A lot of the new students that we bring in are pretty awed by it, especially the record library, with hundreds of thousands of records and CDs, and the studios,” says Lin, referencing the station’s impressive collection of music, which fills a space so large that it once held a bowling alley. “It’s an opportunity that is kind of easy to miss out on. So I feel like we’re bringing in new members — which I’m really happy about — but I just want people to know that WMBR is here, and it’s really cool.”
“Yes. I second that,” says Rock. “MIT is so full of opportunities and resources that you can’t possibly take advantage of all of them, but we are hidden here in the basement of Walker Memorial where students don’t really make it [to] that often.”
“Listeners don’t even know,” laughs Lin. “We had someone pass by the door once, and they were like, ‘The radio station? It’s here?’”
“I didn’t know there was a campus radio station, and I frankly hadn’t really thought of campus radio until I walked into Activities Midway during my first CPW [Campus Preview Weekend], and maybe orientation,” adds Rock. “One of the great things about it is that you can share your own music tastes with all of greater Boston. You have the aux cord for an hour every week, and it’s such a privilege.”
“It’s kind of scary-sounding to think, ‘You’re going to go sit behind a microphone and all of Greater Boston will hear you,’” adds Lin. “But James is always full of confidence, so I just thought, ‘What if we did a show together?’ That’s another thing that we like as we get new students in: people who want to co-host shows together.” 
“We are always looking for new student members,” says Rock. “Whether you want to do a radio show, podcast, help with maintaining and upgrading our broadcast equipment, or gain valuable experience helping to manage and lead a nonprofit organization that is an eclectic mix of MIT students, staff, and members of the local community, let us know!”
Walker Memorial Basement Radio (WMBR) is currently on the air and streaming 24/7. Listen online here, or tune your dial to 88.1 FM. To find out more about joining WMBR, send a message to [email protected].
MIT conductive concrete consortium cements five-year research agreement with Japanese industry
New Post has been published on https://sunalei.org/news/mit-conductive-concrete-consortium-cements-five-year-research-agreement-with-japanese-industry/
The MIT Electron-conductive Cement-based Materials Hub (EC^3 Hub), an outgrowth of the MIT Concrete Sustainability Hub (CSHub), has been established through a five-year sponsored research agreement with the Aizawa Concrete Corp. In particular, the EC^3 Hub will investigate the infrastructure applications of multifunctional concrete — concrete having capacities beyond serving as a structural element, such as functioning as a “battery” for renewable energy. 
Enabled by the MIT Industrial Liaison Program, the newly formed EC^3 Hub represents a large industry-academia collaboration between the MIT CSHub, researchers across MIT, and a Japanese industry consortium led by Aizawa Concrete, a leader in the more sustainable development of concrete structures, which is funding the effort.  
Under this agreement, the EC^3 Hub will focus on two key areas of research: developing self-heating pavement systems and energy storage solutions for sustainable infrastructure systems. “It is an honor for Aizawa Concrete to be associated with the scaling up of this transformational technology from MIT labs to the industrial scale,” says Aizawa Concrete CEO Yoshihiro Aizawa. “This is a project we believe will have a fundamental impact not only on the decarbonization of the industry, but on our societies at large.” 
By running current through carbon black-doped concrete pavements, the EC^3 Hub’s technology could allow cities and municipalities to de-ice road and sidewalk surfaces at scale, improving safety for drivers and pedestrians in icy conditions. Concrete’s potential to store energy from renewable sources — a topic widely covered by news outlets — could also allow it to serve as a “battery” for technologies such as solar, wind, and tidal power generation, which cannot produce a consistent amount of energy (for example, when a cloudy day inhibits a solar panel’s output). Given the scarcity of the ingredients used in many batteries, such as lithium-ion cells, this technology offers an alternative for renewable energy storage at scale. 
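The de-icing function is, at bottom, ordinary resistive (Joule) heating: current driven through the conductive pavement dissipates electrical power as heat at the surface. As a rough, purely illustrative calculation, with voltage and resistance values that are assumptions rather than EC^3 Hub figures:

$$P = \frac{V^2}{R}, \qquad \text{e.g. } V = 48\ \text{V},\ R = 20\ \Omega \text{ per heated segment} \;\Rightarrow\; P \approx 115\ \text{W}.$$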
Image captions (photos courtesy of the MIT EC^3 Hub and Aizawa Concrete):
Carbon black-doped concrete pavements can have current run through them to heat their surfaces, allowing for de-icing. If implemented for city roads and sidewalks, this technology could have benefits for pedestrian and vehicular safety.
Professor Admir Masic, the EC^3 Hub’s founding faculty director, demonstrates the self-heating capability of carbon black-doped concrete pavements with a laser thermometer, showing the difference between the pavement surface temperature and the ambient temperature.
A charged carbon-cement supercapacitor powers multiple LED lights and is connected to a multimeter measuring the system’s voltage at 12 volts.
Regarding the collaborative research agreement, the EC^3 Hub’s founding faculty director, Professor Admir Masic, notes that “this is the type of investment in our new conductive cement-based materials technology which will propel it from our lab bench onto the infrastructure market.” Masic is also an associate professor in the MIT Department of Civil and Environmental Engineering, as well as a principal investigator within the MIT CSHub, among other appointments.
For the April 11 signing of the agreement, Masic was joined in Fukushima, Japan, by MIT colleagues Franz-Josef Ulm, a professor of Civil and Environmental Engineering and faculty director of the MIT CSHub; Yang Shao-Horn, the JR East Professor of Engineering, professor of mechanical engineering, and professor of materials science and engineering; and Jewan Bae, director of MIT Corporate Relations. Ulm and Masic will co-direct the EC^3 Hub.
The EC^3 Hub envisions a close collaboration between MIT engineers and scientists as well as the Aizawa-led Japanese industry consortium for the development of breakthrough innovations for multifunctional infrastructure systems. In addition to higher-strength materials, these systems may be implemented for a variety of novel functions such as roads capable of charging electric vehicles as they drive along them.
Members of the EC^3 Hub will engage with the active stakeholder community within the MIT CSHub to accelerate the industry’s transition to carbon neutrality. The EC^3 Hub will also open opportunities for the MIT community to engage with the large infrastructure industry sector for decarbonization through innovation. 
Physicists arrange atoms in extremely close proximity
New Post has been published on https://sunalei.org/news/physicists-arrange-atoms-in-extremely-close-proximity/
Proximity is key for many quantum phenomena, as interactions between atoms are stronger when the particles are close. In many quantum simulators, scientists arrange atoms as close together as possible to explore exotic states of matter and build new quantum materials.
They typically do this by cooling the atoms to a standstill, then using laser light to position the particles as close as 500 nanometers apart — a limit that is set by the wavelength of light. Now, MIT physicists have developed a technique that allows them to arrange atoms in much closer proximity, down to a mere 50 nanometers. For context, a red blood cell is about 1,000 nanometers wide.
The physicists demonstrated the new approach in experiments with dysprosium, which is the most magnetic atom in nature. They used the new approach to manipulate two layers of dysprosium atoms, and positioned the layers precisely 50 nanometers apart. At this extreme proximity, the magnetic interactions were 1,000 times stronger than if the layers were separated by 500 nanometers.
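The thousandfold enhancement is consistent with the inverse-cube distance dependence of magnetic dipole-dipole interactions; as a quick back-of-the-envelope check (the scaling argument is standard physics, not a figure taken from the paper):

$$\frac{U(50\ \text{nm})}{U(500\ \text{nm})} = \left(\frac{500\ \text{nm}}{50\ \text{nm}}\right)^{3} = 10^{3} = 1000.$$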
What’s more, the scientists were able to measure two new effects caused by the atoms’ proximity. Their enhanced magnetic forces caused “thermalization,” or the transfer of heat from one layer to another, as well as synchronized oscillations between layers. These effects petered out as the layers were spaced farther apart.
“We have gone from positioning atoms from 500 nanometers to 50 nanometers apart, and there is a lot you can do with this,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. “At 50 nanometers, the behavior of atoms is so much different that we’re really entering a new regime here.”
Ketterle and his colleagues say the new approach can be applied to many other atoms to study quantum phenomena. For their part, the group plans to use the technique to manipulate atoms into configurations that could generate the first purely magnetic quantum gate — a key building block for a new type of quantum computer.
The team has published their results today in the journal Science. The study’s co-authors include lead author and physics graduate student Li Du, along with Pierre Barral, Michael Cantara, Julius de Hond, and Yu-Kun Lu — all members of the MIT-Harvard Center for Ultracold Atoms, the Department of Physics, and the Research Laboratory of Electronics at MIT.
Peaks and valleys
To manipulate and arrange atoms, physicists typically first cool a cloud of atoms to temperatures approaching absolute zero, then use a system of laser beams to corral the atoms into an optical trap.
Laser light is an electromagnetic wave with a specific wavelength (the distance between maxima of the electric field) and frequency. The wavelength limits the smallest pattern into which light can be shaped to typically 500 nanometers, the so-called optical resolution limit. Since atoms are attracted by laser light of certain frequencies, they will be positioned at the points of peak laser intensity. For this reason, existing techniques have been limited in how close they can position atomic particles, and could not be used to explore phenomena that happen at much shorter distances.
“Conventional techniques stop at 500 nanometers, limited not by the atoms but by the wavelength of light,” Ketterle explains. “We have found now a new trick with light where we can break through that limit.”
The team’s new approach, like current techniques, starts by cooling a cloud of atoms — in this case, to about 1 microkelvin, just a hair above absolute zero — at which point, the atoms come to a near-standstill. Physicists can then use lasers to move the frozen particles into desired configurations.
Then, Du and his collaborators worked with two laser beams, each with a different frequency, or color, and circular polarization, or direction of the laser’s electric field. When the two beams travel through a super-cooled cloud of atoms, the atoms can orient their spin in opposite directions, following either of the two lasers’ polarization. The result is that the beams produce two groups of the same atoms, only with opposite spins.
Each laser beam formed a standing wave, a periodic pattern of electric field intensity with a spatial period of 500 nanometers. Due to their different polarizations, each standing wave attracted and corralled one of two groups of atoms, depending on their spin. The lasers could be overlaid and tuned such that the distance between their respective peaks is as small as 50 nanometers, meaning that the atoms gravitating to each respective laser’s peaks would be separated by the same 50 nanometers.
But in order for this to happen, the lasers would have to be extremely stable and immune to all external noise, such as from shaking or even breathing on the experiment. The team realized they could stabilize both lasers by directing them through an optical fiber, which served to lock the light beams in place in relation to each other.
“The idea of sending both beams through the optical fiber meant the whole machine could shake violently, but the two laser beams stayed absolutely stable with respect to each other,” Du says.
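As a purely illustrative check of the geometry described above (and not experimental control code), the short Python sketch below builds two standing-wave intensity patterns with a 500-nanometer spatial period, shifts one by 50 nanometers, and confirms that the peaks where the two spin groups collect end up 50 nanometers apart. The numbers come from the description above; everything else is an assumption for illustration.

```python
# Illustrative geometry check only: two standing-wave intensity patterns with a
# 500 nm spatial period, one shifted so its peaks sit 50 nm from the other's.
import numpy as np

period_nm = 500.0   # spatial period of each standing wave
offset_nm = 50.0    # relative shift between the two patterns

# Position axis in nm, kept shorter than one period so each pattern has a single peak here.
x = np.linspace(0.0, 400.0, 8001)
intensity_up = np.cos(np.pi * x / period_nm) ** 2                  # traps one spin group
intensity_down = np.cos(np.pi * (x - offset_nm) / period_nm) ** 2  # traps the other group

peak_up = x[np.argmax(intensity_up)]
peak_down = x[np.argmax(intensity_down)]
print(f"Separation between neighboring peaks of the two traps: {abs(peak_down - peak_up):.1f} nm")
```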
Magnetic forces at close range
As a first test of their new technique, the team used atoms of dysprosium — a rare-earth metal that is one of the strongest magnetic elements in the periodic table, particularly at ultracold temperatures. However, at the scale of atoms, the element’s magnetic interactions are relatively weak at distances of even 500 nanometers. As with common refrigerator magnets, the magnetic attraction between atoms increases with proximity, and the scientists suspected that if their new technique could space dysprosium atoms as close as 50 nanometers apart, they might observe the emergence of otherwise weak interactions between the magnetic atoms.
“We could suddenly have magnetic interactions, which used to be almost negligible but now are really strong,” Ketterle says.
The team applied their technique to dysprosium, first super-cooling the atoms, then passing two lasers through to split the atoms into two spin groups, or layers. They then directed the lasers through an optical fiber to stabilize them, and found that indeed, the two layers of dysprosium atoms gravitated to their respective laser peaks, which in effect separated the layers of atoms by 50 nanometers — the closest distance that any ultracold atom experiment has been able to achieve.
At this extremely close proximity, the atoms’ natural magnetic interactions were significantly enhanced, and were 1,000 times stronger than if they were positioned 500 nanometers apart. The team observed that these interactions resulted in two novel quantum phenomena: collective oscillation, in which one layer’s vibrations caused the other layer to vibrate in sync; and thermalization, in which one layer transferred heat to the other, purely through magnetic fluctuations in the atoms.
“Until now, heat between atoms could only be exchanged when they were in the same physical space and could collide,” Du notes. “Now we have seen atomic layers, separated by vacuum, and they exchange heat via fluctuating magnetic fields.”
The team’s results introduce a new technique that can be used to position many types of atoms in close proximity. They also show that atoms, placed close enough together, can exhibit interesting quantum phenomena that could be harnessed to build new quantum materials, and potentially, magnetically driven atomic systems for quantum computers.
“We are really bringing super-resolution methods to the field, and it will become a general tool for doing quantum simulations,” Ketterle says. “There are many variants possible, which we are working on.”
This research was funded, in part, by the National Science Foundation and the Department of Defense.
Epigenomic analysis sheds light on risk factors for ALS
New Post has been published on https://sunalei.org/news/epigenomic-analysis-sheds-light-on-risk-factors-for-als/
For most patients, it’s unknown exactly what causes amyotrophic lateral sclerosis (ALS), a disease characterized by degeneration of motor neurons that impairs muscle control and eventually leads to death.
Studies have identified certain genes that confer a higher risk of the disease, but scientists believe there are many more genetic risk factors that have yet to be discovered. One reason why these drivers have been hard to find is that some are found in very few patients, making it hard to pick them out without a very large sample of patients. Additionally, some of the risk may be driven by epigenomic factors, rather than mutations in protein-coding genes.
Working with the Answer ALS consortium, a team of MIT researchers has analyzed epigenetic modifications — tags that determine which genes are turned on in a cell — in motor neurons derived from induced pluripotent stem (IPS) cells from 380 ALS patients.
This analysis revealed a strong differential signal associated with a known subtype of ALS, and about 30 locations with modifications that appear to be linked to rates of disease progression in ALS patients. The findings may help scientists develop new treatments that are targeted to patients with certain genetic risk factors.
“If the root causes are different for all these different versions of the disease, the drugs will be very different and the signals in IPS cells will be very different,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We may get to a point in a decade or so where we don’t even think of ALS as one disease, where there are drugs that are treating specific types of ALS that only work for one group of patients and not for another.”
MIT postdoc Stanislav Tsitkov is the lead author of the paper, which appears today in Nature Communications.
Finding risk factors
ALS is a rare disease that is estimated to affect about 30,000 people in the United States. One of the challenges in studying the disease is that while genetic variants are believed to account for about 50 percent of ALS risk (with environmental factors making up the rest), most of the variants that contribute to that risk have not been identified.
Similar to Alzheimer’s disease, there may be a large number of genetic variants that can confer risk, but each individual patient may carry only a small number of those. This makes it difficult to identify the risk factors unless scientists have a very large population of patients to analyze.
“Because we expect the disease to be heterogeneous, you need to have large numbers of patients before you can pick up on signals like this. To really be able to classify the subtypes of disease, we’re going to need to look at a lot of people,” Fraenkel says.
About 10 years ago, the Answer ALS consortium began to collect large numbers of patient samples, which could allow for larger-scale studies that might reveal some of the genetic drivers of the disease. From blood samples, researchers can create induced pluripotent stem cells and then induce them to differentiate into motor neurons, the cells most affected by ALS.
“We don’t think all ALS patients are going to be the same, just like all cancers are not the same. And the goal is being able to find drivers of the disease that could be therapeutic targets,” Fraenkel says.
In this study, Fraenkel and his colleagues wanted to see if patient-derived cells could offer any information about molecular differences that are relevant to ALS. They focused on epigenomic modifications, using a method called ATAC-seq to measure chromatin density across the genome of each cell. Chromatin is a complex of DNA and proteins that determines which genes are accessible to be transcribed by the cell, depending on how densely packed the chromatin is.
In data that were collected and analyzed over several years, the researchers did not find any global signal that clearly differentiated the 380 ALS patients in their study from 80 healthy control subjects. However, they did find a strong differential signal associated with a subtype of ALS, characterized by a genetic mutation in the C9orf72 gene.
Additionally, they identified about 30 regions that were associated with slower rates of disease progression in ALS patients. Many of these regions are located near genes related to the cellular inflammatory response; interestingly, several of the identified genes have also been implicated in other neurodegenerative diseases, such as Parkinson’s disease.
“You can use a small number of these epigenomic regions and look at the intensity of the signal there, and predict how quickly someone’s disease will progress. That really validates the hypothesis that the epigenomics can be used as a filter to better understand the contribution of the person’s genome,” Fraenkel says.
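The study’s statistical models are not reproduced in the article, but the idea Fraenkel describes, using the signal intensity at a small set of accessible-chromatin regions to predict how quickly a patient’s disease will progress, maps naturally onto a regression problem. The sketch below is a hedged illustration on synthetic data (roughly 30 regions, echoing the study); the patient counts, effect sizes, and choice of ridge regression are assumptions, not the paper’s method.

```python
# Hedged sketch only: predicting a disease-progression rate from chromatin-accessibility
# signal at a handful of genomic regions. Synthetic data; not the study's actual model.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_regions = 300, 30                 # ~30 regions, as described above

# ATAC-seq-style signal intensities per patient per region (arbitrary units).
region_signal = rng.normal(size=(n_patients, n_regions))

# Synthetic "ground truth": progression rate driven by a few regions, plus noise.
true_weights = np.zeros(n_regions)
true_weights[:5] = [0.8, -0.6, 0.5, 0.4, -0.3]
progression_rate = region_signal @ true_weights + 0.5 * rng.normal(size=n_patients)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, region_signal, progression_rate, cv=5, scoring="r2")
print(f"Cross-validated R^2 for the synthetic example: {scores.mean():.2f}")
```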
“By harnessing the very large number of participant samples and extensive data collected by the Answer ALS Consortium, these studies were able to rigorously test whether the observed changes might be artifacts related to the techniques of sample collection, storage, processing, and analysis, or truly reflective of important biology,” says Lyle Ostrow, an associate professor of neurology at the Lewis Katz School of Medicine at Temple University, who was not involved in the study. “They developed standard ways to control for these variables, to make sure the results can be accurately compared. Such studies are incredibly important for accelerating ALS therapy development, as they will enable data and samples collected from different studies to be analyzed together.”
Targeted drugs
The researchers now hope to further investigate these genomic regions and see how they might drive different aspects of ALS progression in different subsets of patients. This could help scientists develop drugs that might work in different groups of patients, and help them identify which patients should be chosen for clinical trials of those drugs, based on genetic or epigenetic markers.
Last year, the U.S. Food and Drug Administration approved a drug called tofersen, which can be used in ALS patients with a mutation in a gene called SOD1. This drug is very effective for those patients, who make up about 1 percent of the total population of people with ALS. Fraenkel’s hope is that more drugs can be developed for, and tested in, people with other genetic drivers of ALS.
“If you had a drug like tofersen that works for 1 percent of patients and you just gave it to a typical phase two clinical trial, you probably wouldn’t have anybody with that mutation in the trial, and it would’ve failed. And so that drug, which is a lifesaver for people, would never have gotten through,” Fraenkel says.
The MIT team is now using an approach called quantitative trait locus (QTL) analysis to try to identify subgroups of ALS patients whose disease is driven by specific genomic variants.
“We can integrate the genomics, the transcriptomics, and the epigenomics, as a way to find subgroups of ALS patients who have distinct phenotypic signatures from other ALS patients and healthy controls,” Tsitkov says. “We have already found a few potential hits in that direction.”
The research was funded by the Answer ALS program, which is supported by the Robert Packard Center for ALS Research at Johns Hopkins University, Travelers Insurance, ALS Finding a Cure Foundation, Stay Strong Vs. ALS, Answer ALS Foundation, Microsoft, Caterpillar Foundation, American Airlines, Team Gleason, the U.S. National Institutes of Health, Fishman Family Foundation, Aviators Against ALS, AbbVie Foundation, Chan Zuckerberg Initiative, ALS Association, National Football League, F. Prime, M. Armstrong, Bruce Edwards Foundation, the Judith and Jean Pape Adams Charitable Foundation, Muscular Dystrophy Association, Les Turner ALS Foundation, PGA Tour, Gates Ventures, and Bari Lipp Foundation. This work was also supported, in part, by grants from the National Institutes of Health and the MIT-GSK Gertrude B. Elion Research Fellowship Program for Drug Discovery and Disease.
Natural language boosts LLM performance in coding, planning, and robotics
New Post has been published on https://sunalei.org/news/natural-language-boosts-llm-performance-in-coding-planning-and-robotics/
Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.
Luckily, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have found a treasure trove of abstractions within natural language. In three papers to be presented at the International Conference on Learning Representations this month, the group shows how our everyday words are a rich source of context for language models, helping them build better overarching representations for code synthesis, AI planning, and robotic navigation and manipulation.
The three separate frameworks build libraries of abstractions for their given task: LILO (library induction from language observations) can synthesize, compress, and document code; Ada (action domain acquisition) explores sequential decision-making for artificial intelligence agents; and LGA (language-guided abstraction) helps robots better understand their environments to develop more feasible plans. Each system is a neurosymbolic method, a type of AI that blends human-like neural networks and program-like logical components.
LILO: A neurosymbolic framework that codes
Large language models can be used to quickly write solutions to small-scale coding tasks, but cannot yet architect entire software libraries like the ones written by human software engineers. To take their software development capabilities further, AI models need to refactor (cut down and combine) code into libraries of succinct, readable, and reusable programs.
Refactoring tools like the previously developed MIT-led Stitch algorithm can automatically identify abstractions, so, in a nod to the Disney movie “Lilo & Stitch,” CSAIL researchers combined these algorithmic refactoring approaches with LLMs. Their neurosymbolic method LILO uses a standard LLM to write code, then pairs it with Stitch to find abstractions that are comprehensively documented in a library.
LILO’s unique emphasis on natural language allows the system to do tasks that require human-like commonsense knowledge, such as identifying and removing all vowels from a string of code and drawing a snowflake. In both cases, the CSAIL system outperformed standalone LLMs, as well as a previous library learning algorithm from MIT called DreamCoder, indicating its ability to build a deeper understanding of the words within prompts. These encouraging results point to how LILO could assist with things like writing programs to manipulate documents like Excel spreadsheets, helping AI answer questions about visuals, and drawing 2D graphics.
“Language models prefer to work with functions that are named in natural language,” says Gabe Grand SM ’23, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author on the research. “Our work creates more straightforward abstractions for language models and assigns natural language names and documentation to each one, leading to more interpretable code for programmers and improved system performance.”
When prompted on a programming task, LILO first uses an LLM to quickly propose solutions based on data it was trained on, and then the system slowly searches more exhaustively for outside solutions. Next, Stitch efficiently identifies common structures within the code and pulls out useful abstractions. These are then automatically named and documented by LILO, resulting in simplified programs that can be used by the system to solve more complex tasks.
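The authors’ implementation is not shown in the article; purely as a schematic, the sketch below renders the loop just described: draft programs with an LLM, compress shared structure into abstractions (the role Stitch plays), then have the LLM name and document each abstraction for the library. The `llm_*` and `stitch_*` functions are trivial stand-ins, not real model or Stitch calls.

```python
# Schematic sketch of a LILO-style library-learning loop (not the authors' code).
# The LLM and the Stitch refactoring step are replaced by trivial stand-ins so the
# control flow runs end to end; nothing here calls a real model or the real Stitch.
from typing import Dict, List

def llm_write_programs(task: str, library: Dict[str, str]) -> List[str]:
    """Stand-in for an LLM drafting candidate programs, conditioned on the current library."""
    return [f"(solve {task} :variant {i})" for i in range(2)]

def stitch_find_abstractions(programs: List[str]) -> List[str]:
    """Stand-in for Stitch: return fragments that recur across the candidate programs."""
    tokens = [tok for prog in programs for tok in prog.strip("()").split()]
    return sorted({tok for tok in tokens if tokens.count(tok) > 1 and not tok.startswith(":")})

def llm_name_and_document(fragment: str) -> Dict[str, str]:
    """Stand-in for the LLM assigning a natural-language name and doc to an abstraction."""
    return {"name": f"{fragment}_helper",
            "doc": f"Reusable routine built around the '{fragment}' pattern."}

library: Dict[str, str] = {}
for task in ["remove_vowels", "draw_snowflake"]:
    candidates = llm_write_programs(task, library)           # 1) propose solutions
    for fragment in stitch_find_abstractions(candidates):    # 2) compress shared structure
        entry = llm_name_and_document(fragment)              # 3) auto-name and document it
        library.setdefault(entry["name"], entry["doc"])

for name, doc in library.items():
    print(f"{name}: {doc}")
```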
The MIT framework writes programs in domain-specific programming languages, like Logo, a language developed at MIT in the 1970s to teach children about programming. Scaling up automated refactoring algorithms to handle more general programming languages like Python will be a focus for future research. Still, their work represents a step forward for how language models can facilitate increasingly elaborate coding activities.
Ada: Natural language guides AI task planning
Just like in programming, AI models that automate multi-step tasks in households and command-based video games lack abstractions. Imagine you’re cooking breakfast and ask your roommate to bring a hot egg to the table — they’ll intuitively abstract their background knowledge about cooking in your kitchen into a sequence of actions. In contrast, an LLM trained on similar information will still struggle to reason about what it needs to build a flexible plan.
Named after the famed mathematician Ada Lovelace, who many consider the world’s first programmer, the CSAIL-led “Ada” framework makes headway on this issue by developing libraries of useful plans for virtual kitchen chores and gaming. The method trains on potential tasks and their natural language descriptions, then a language model proposes action abstractions from this dataset. A human operator scores and filters the best plans into a library, so that the best possible actions can be implemented into hierarchical plans for different tasks.
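As with LILO, the actual Ada pipeline is not reproduced here; the fragment below is only a schematic rendering of the propose-score-filter step described above, with a stand-in “language model” and a score threshold standing in for the human operator’s curation.

```python
# Schematic only (not the authors' code): an Ada-style "propose, score, filter" pass
# that turns language-model proposals into a curated library of action abstractions.
from typing import Dict, List

def llm_propose_action_abstractions(task_descriptions: List[str]) -> Dict[str, float]:
    """Stand-in: propose candidate high-level actions with (pretend) usefulness scores."""
    return {
        "chill(item)": 0.9,        # e.g., useful for "place chilled wine in a cabinet"
        "assemble(furniture)": 0.8,
        "wander_randomly()": 0.2,  # a weak proposal that filtering should discard
    }

tasks = ["place chilled wine in a cabinet", "craft a bed"]
proposals = llm_propose_action_abstractions(tasks)

SCORE_THRESHOLD = 0.5  # stands in for the human operator's scoring and filtering
action_library = {action: score for action, score in proposals.items() if score >= SCORE_THRESHOLD}

print("Curated action abstractions:", sorted(action_library))
# A hierarchical planner would then compose these actions into task-specific plans.
```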
“Traditionally, large language models have struggled with more complex tasks because of problems like reasoning about abstractions,” says Ada lead researcher Lio Wong, an MIT graduate student in brain and cognitive sciences, CSAIL affiliate, and LILO coauthor. “But we can combine the tools that software engineers and roboticists use with LLMs to solve hard problems, such as decision-making in virtual environments.”
When the researchers incorporated the widely-used large language model GPT-4 into Ada, the system completed more tasks in a kitchen simulator and Mini Minecraft than the AI decision-making baseline “Code as Policies.” Ada used the background information hidden within natural language to understand how to place chilled wine in a cabinet and craft a bed. The results indicated a staggering 59 and 89 percent task accuracy improvement, respectively.
With this success, the researchers hope to generalize their work to real-world homes, with the hopes that Ada could assist with other household tasks and aid multiple robots in a kitchen. For now, its key limitation is that it uses a generic LLM, so the CSAIL team wants to apply a more powerful, fine-tuned language model that could assist with more extensive planning. Wong and her colleagues are also considering combining Ada with a robotic manipulation framework fresh out of CSAIL: LGA (language-guided abstraction).
Language-guided abstraction: Representations for robotic tasks
Andi Peng SM ’23, an MIT graduate student in electrical engineering and computer science and CSAIL affiliate, and her coauthors designed a method to help machines interpret their surroundings more like humans, cutting out unnecessary details in a complex environment like a factory or kitchen. Just like LILO and Ada, LGA has a novel focus on how natural language leads us to those better abstractions.
In these more unstructured environments, a robot will need some common sense about what it’s tasked with, even with basic training beforehand. Ask a robot to hand you a bowl, for instance, and the machine will need a general understanding of which features are important within its surroundings. From there, it can reason about how to give you the item you want. 
In LGA’s case, humans first provide a pre-trained language model with a general task description using natural language, like “bring me my hat.” Then, the model translates this information into abstractions about the essential elements needed to perform this task. Finally, an imitation policy trained on a few demonstrations can implement these abstractions to guide a robot to grab the desired item.
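The LGA pipeline itself is not listed in the article; the toy Python sketch below only illustrates the gist of the step just described: let a language model decide which scene features matter for the stated task, and pass only those to the downstream policy. The scene contents, object names, and the keyword-matching “LLM” are all stand-ins.

```python
# Hedged sketch of the LGA-style idea (not the authors' code): use a language model to
# pick task-relevant features, then hand only those features to an imitation policy.
from typing import Dict, List

scene: Dict[str, Dict] = {
    "hat":    {"position": (0.2, 1.1), "color": "red"},
    "bowl":   {"position": (0.8, 0.4), "color": "white"},
    "poster": {"position": (2.0, 1.5), "color": "blue"},   # irrelevant clutter
}

def llm_select_relevant_objects(task: str, objects: List[str]) -> List[str]:
    """Stand-in for a pre-trained LM choosing task-relevant objects from their names."""
    return [name for name in objects if name in task]

def abstract_state(task: str, scene: Dict[str, Dict]) -> Dict[str, Dict]:
    """Keep only the features the task actually needs (the 'abstraction')."""
    relevant = llm_select_relevant_objects(task, list(scene))
    return {name: scene[name] for name in relevant}

task = "bring me my hat"
observation = abstract_state(task, scene)
print("State passed to the imitation policy:", observation)   # only the hat survives
```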
Previous work required a person to take extensive notes on different manipulation tasks to pre-train a robot, which can be expensive. Remarkably, LGA guides language models to produce abstractions similar to those of a human annotator, but in less time. To illustrate this, LGA developed robotic policies to help Boston Dynamics’ Spot quadruped pick up fruits and throw drinks in a recycling bin. These experiments show how the MIT-developed method can scan the world and develop effective plans in unstructured environments, potentially guiding autonomous vehicles on the road and robots working in factories and kitchens.
“In robotics, a truth we often disregard is how much we need to refine our data to make a robot useful in the real world,” says Peng. “Beyond simply memorizing what’s in an image for training robots to perform tasks, we wanted to leverage computer vision and captioning models in conjunction with language. By producing text captions from what a robot sees, we show that language models can essentially build important world knowledge for a robot.”
The challenge for LGA is that some behaviors can’t be explained in language, making certain tasks underspecified. To expand how they represent features in an environment, Peng and her colleagues are considering incorporating multimodal visualization interfaces into their work. In the meantime, LGA provides a way for robots to gain a better feel for their surroundings when giving humans a helping hand. 
An “exciting frontier” in AI
“Library learning represents one of the most exciting frontiers in artificial intelligence, offering a path towards discovering and reasoning over compositional abstractions,” says Robert Hawkins, an assistant professor at the University of Wisconsin-Madison, who was not involved with the papers. Hawkins notes that previous techniques exploring this subject have been “too computationally expensive to use at scale” and have an issue with the lambdas, or keywords used to describe new functions in many languages, that they generate. “They tend to produce opaque ‘lambda salads,’ big piles of hard-to-interpret functions. These recent papers demonstrate a compelling way forward by placing large language models in an interactive loop with symbolic search, compression, and planning algorithms. This work enables the rapid acquisition of more interpretable and adaptive libraries for the task at hand.”
By building libraries of high-quality code abstractions using natural language, the three neurosymbolic methods make it easier for language models to tackle more elaborate problems and environments in the future. This deeper understanding of the precise keywords within a prompt presents a path forward in developing more human-like AI models.
MIT CSAIL members are senior authors for each paper: Joshua Tenenbaum, a professor of brain and cognitive sciences, for both LILO and Ada; Julie Shah, head of the Department of Aeronautics and Astronautics, for LGA; and Jacob Andreas, associate professor of electrical engineering and computer science, for all three. The additional MIT authors are all PhD students: Maddy Bowers and Theo X. Olausson for LILO, Jiayuan Mao and Pratyusha Sharma for Ada, and Belinda Z. Li for LGA. Muxin Liu of Harvey Mudd College was a coauthor on LILO; Zachary Siegel of Princeton University, Jaihai Feng of the University of California at Berkeley, and Noa Korneev of Microsoft were coauthors on Ada; and Ilia Sucholutsky, Theodore R. Sumers, and Thomas L. Griffiths of Princeton were coauthors on LGA. 
LILO and Ada were supported, in part, by ​​MIT Quest for Intelligence, the MIT-IBM Watson AI Lab, Intel, U.S. Air Force Office of Scientific Research, the U.S. Defense Advanced Research Projects Agency, and the U.S. Office of Naval Research, with the latter project also receiving funding from the Center for Brains, Minds and Machines. LGA received funding from the U.S. National Science Foundation, Open Philanthropy, the Natural Sciences and Engineering Research Council of Canada, and the U.S. Department of Defense.
Fostering research, careers, and community in materials science
New Post has been published on https://sunalei.org/news/fostering-research-careers-and-community-in-materials-science/
Gabrielle Wood, a junior at Howard University majoring in chemical engineering, is on a mission to improve the sustainability and life cycles of natural resources and materials. Her work in the Materials Initiative for Comprehensive Research Opportunity (MICRO) program has given her hands-on experience with many different aspects of research, including MATLAB programming, experimental design, data analysis, figure-making, and scientific writing.
Wood is also one of 10 undergraduates from 10 universities around the United States to participate in the first MICRO Summit earlier this year. The internship program, developed by the MIT Department of Materials Science and Engineering (DMSE), first launched in fall 2021. Now in its third year, the program continues to grow, providing even more opportunities for non-MIT undergraduate students — including the MICRO Summit and the program’s expansion to include Northwestern University.
“I think one of the most valuable aspects of the MICRO program is the ability to do research long term with an experienced professor in materials science and engineering,” says Wood. “My school has limited opportunities for undergraduate research in sustainable polymers, so the MICRO program allowed me to gain valuable experience in this field, which I would not otherwise have.”
Like Wood, Griheydi Garcia, a senior chemistry major at Manhattan College, values the exposure to materials science, especially since she is not able to learn as much about it at her home institution.
“I learned a lot about crystallography and defects in materials through the MICRO curriculum, especially through videos,” says Garcia. “The research itself is very valuable, as well, because we get to apply what we’ve learned through the videos in the research we do remotely.”
Expanding research opportunities
From the beginning, the MICRO program was designed as a fully remote, rigorous education and mentoring program targeted toward students from underserved backgrounds interested in pursuing graduate school in materials science or related fields. Interns are matched with faculty to work on their specific research interests.
Jessica Sandland ’99, PhD ’05, principal lecturer in DMSE and co-founder of MICRO, says that research projects for the interns are designed to be work that they can do remotely, such as developing a machine-learning algorithm or a data analysis approach.
“It’s important to note that it’s not just about what the program and faculty are bringing to the student interns,” says Sandland, a member of the MIT Digital Learning Lab, a joint program between MIT Open Learning and the Institute’s academic departments. “The students are doing real research and work, and creating things of real value. It’s very much an exchange.”
Cécile Chazot PhD ’22, now an assistant professor of materials science and engineering at Northwestern University, had helped to establish MICRO at MIT from the very beginning. Once at Northwestern, she quickly realized that expanding MICRO to Northwestern would offer even more research opportunities to interns than by relying on MIT alone — leveraging the university’s strong materials science and engineering department, as well as offering resources for biomaterials research through Northwestern’s medical school. The program received funding from 3M and officially launched at Northwestern in fall 2023. Approximately half of the MICRO interns are now in the program with MIT and half are with Northwestern. Wood and Garcia both participate in the program via Northwestern.
“By expanding to another school, we’ve been able to have interns work with a much broader range of research projects,” says Chazot. “It has become easier for us to place students with faculty and research that match their interests.”
Building community
The MICRO program received a Higher Education Innovation grant from the Abdul Latif Jameel World Education Lab, part of MIT Open Learning, to develop an in-person summit. In January 2024, interns visited MIT for three days of presentations, workshops, and campus tours — including a tour of the MIT.nano building — as well as various community-building activities.
“A big part of MICRO is the community,” says Chazot. “A highlight of the summit was just seeing the students come together.”
The summit also included panel discussions that allowed interns to gain insights and advice from graduate students and professionals. The graduate panel discussion included MIT graduate students Sam Figueroa (mechanical engineering), Isabella Caruso (DMSE), and Eliana Feygin (DMSE). The career panel was led by Chazot and included Jatin Patil PhD ’23, head of product at SiTration; Maureen Reitman ’90, ScD ’93, group vice president and principal engineer at Exponent; Lucas Caretta PhD ’19, assistant professor of engineering at Brown University; Raquel D’Oyen ’90, who holds a PhD from Northwestern University and is a senior engineer at Raytheon; and Ashley Kaiser MS ’19, PhD ’21, senior process engineer at 6K.
Students also had an opportunity to share their work with each other through research presentations. Their presentations covered a wide range of topics, including: developing a computer program to calculate solubility parameters for polymers used in textile manufacturing; performing a life-cycle analysis of a photonic chip and evaluating its environmental impact in comparison to a standard silicon microchip; and applying machine learning algorithms to scanning transmission electron microscopy images of CrSBr, a two-dimensional magnetic material. 
“The summit was wonderful and the best academic experience I have had as a first-year college student,” says MICRO intern Gabriella La Cour, who is pursuing a major in chemistry and a dual degree in biomedical engineering at Spelman College and participates in MICRO through MIT. “I got to meet so many students who were all in grades above me … and I learned a little about how to navigate college as an upperclassman.”
“I actually have an extremely close friendship with one of the students, and we keep in touch regularly,” adds La Cour. “Professor Chazot gave valuable advice about applications and recommendation letters that will be useful when I apply to REUs [Research Experiences for Undergraduates] and graduate schools.”
Looking to the future, MICRO organizers hope to continue to grow the program’s reach.
“We would love to see other schools taking on this model,” says Sandland. “There are a lot of opportunities out there. The more departments, research groups, and mentors that get involved with this program, the more impact it can have.”
NASA Selects Commercial Service Studies to Enable Mars Robotic Science - NASA
New Post has been published on https://sunalei.org/news/nasa-selects-commercial-service-studies-to-enable-mars-robotic-science-nasa/
Nine companies have been selected to conduct early-stage studies of concepts for commercial services to support lower-cost, higher-frequency missions to the Red Planet.
NASA has identified nine U.S. companies to perform a total of 12 concept studies of how commercial services can be applied to enable science missions to Mars. Each awardee will receive between $200,000 and $300,000 to produce a detailed report on potential services — including payload delivery, communications relay, surface imaging, and payload hosting — that could support future missions to the Red Planet.
The companies were selected from among those that responded to a Jan. 29 request for proposals from U.S. industry.
NASA’s Mars Exploration Program initiated the request for proposals to help establish a new paradigm for missions to Mars with the potential to advance high-priority science objectives. Many of the selected proposals center on adapting existing projects currently focused on the Moon and Earth to Mars-based applications.
They include “space tugs” to carry other spacecraft to Mars, spacecraft to host science instruments and cameras, and telecommunications relays. The concepts being sought are intended to support a broad strategy of partnerships between government, industry, and international partners to enable frequent, lower-cost missions to Mars over the next 20 years.
“We’re in an exciting new era of space exploration, with rapid growth of commercial interest and capabilities,” said Eric Ianson, director of NASA’s Mars Exploration Program. “Now is the right time for NASA to begin looking at how public-private partnerships could support science at Mars in the coming decades.”
The selected Mars Exploration Commercial Services studies are divided into four categories:
Small payload delivery and hosting services
Lockheed Martin Corporation, Littleton, Colorado — adapt a lunar-exploration spacecraft
Impulse Space, Inc., Redondo Beach, California — adapt an Earth-vicinity orbital transfer vehicle (space tug)
Firefly Aerospace, Cedar Park, Texas — adapt a lunar-exploration spacecraft
Large payload delivery and hosting services
United Launch Services (ULA), LLC, Centennial, Colorado — modify an Earth-vicinity cryogenic upper stage
Blue Origin, LLC, Kent, Washington — adapt an Earth- and lunar-vicinity spacecraft
Astrobotic Technology, Inc., Pittsburgh — modify a lunar-exploration spacecraft
Mars surface-imaging services
Albedo Space Corporation, Broomfield, Colorado — adapt a low Earth orbit imaging satellite
Redwire Space, Inc., Littleton, Colorado — modify a low Earth orbit commercial imaging spacecraft
Astrobotic Technology, Inc. — modify a lunar exploration spacecraft to include imaging
Next-generation relay services
Space Exploration Technologies Corporation (SpaceX), Hawthorne, California — adapt Earth-orbit communication satellites for Mars
Lockheed Martin Corporation — provide communication relay services via a modified Mars orbiter
Blue Origin, LLC — provide communication relay services via an adapted Earth- and lunar-vicinity spacecraft
The 12-week studies are planned to conclude in August, and a study summary will be released later in the year. These studies could potentially lead to future requests for proposals but do not constitute a NASA commitment.
NASA is concurrently requesting separate industry proposals for its Mars Sample Return (MSR) campaign, which seeks to bring samples being collected by the agency’s Perseverance rover to Earth, where they can be studied by laboratory equipment too large and complex to bring to Mars. The MSR industry studies are completely independent of the Mars Exploration Program commercial studies.
NASA’s Jet Propulsion Laboratory in Southern California manages the Mars Exploration Program on behalf of NASA’s Science Mission Directorate in Washington. The goal of the program is to provide a continuous flow of scientific information and discovery through a carefully selected series of robotic orbiters, landers, and mobile laboratories interconnected by a high-bandwidth Mars-Earth communications network. Scientific data and associated information for all Mars Exploration Program missions are archived in the NASA Planetary Data System.
Caltech in Pasadena, California, manages JPL for NASA.
News Media Contacts
Andrew Good Jet Propulsion Laboratory, Pasadena, Calif. 818-393-2433 [email protected]
Karen Fox / Charles Blue NASA Headquarters, Washington 301-286-6284 / 202-802-5345 [email protected] / [email protected]
2024-057
By Their Powers Combined - NASA
New Post has been published on https://sunalei.org/news/by-their-powers-combined-nasa/
This April 20, 2024, image shows a first: all six radio frequency antennas at the Madrid Deep Space Communication Complex, part of NASA’s Deep Space Network (DSN), carried out a test to receive data from the agency’s Voyager 1 spacecraft at the same time.
Combining the antennas’ receiving power, or arraying, lets the DSN collect the very faint signals from faraway spacecraft. Voyager 1 is over 15 billion miles (24 billion kilometers) away, so its signal on Earth is far fainter than that of any other spacecraft the DSN communicates with. It currently takes Voyager 1’s signal over 22 ½ hours to travel from the spacecraft to Earth. To better receive Voyager 1’s radio communications, a large antenna – or an array of multiple smaller antennas – can be used. A five-antenna array is currently needed to downlink science data from the spacecraft’s Plasma Wave System (PWS) instrument. As Voyager 1 gets farther away, six antennas will be needed.
Image Credit: MDSCC/INTA, Francisco “Paco” Moreno
Big Science Drives Wallops’ Upgrades for NASA Suborbital Missions - NASA
New Post has been published on https://sunalei.org/news/big-science-drives-wallops-upgrades-for-nasa-suborbital-missions-nasa/
Large amounts of data collected by today’s sensitive science instruments present a data-handling challenge to small rocket and balloon suborbital mission computing and avionics systems.
“Just generally, science payloads are getting larger and more complex,” said astrophysicist Alan Kogut, of NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “You’re always pushing the limit of what can be done, and getting their data back quickly is clearly a high priority for the balloon science community.”
Suborbital science platforms provide low-cost, quick-turnaround test opportunities to study Earth, our solar system, and the universe. Engineers at NASA’s Wallops Flight Facility in Virginia are developing new, higher-capacity systems to process, store, and transmit that data through NASA’s Internal Research and Development (IRAD) program.
One high-data effort, Kogut said, requires new types of sensors to capture faint patterns within the cosmic microwave background: the oldest light in the cosmos, which was produced 380,000 years after the big bang, when the universe had cooled enough to form the first atoms.
Capturing the polarization — the orientation of this light relative to its path of travel — should show patterns from the original quantum state of the universe, he explained. If seen, these patterns could point the way to a quantum theory of gravity: something beyond Einstein’s general theory of relativity.
“Observing this polarization takes a lot of data,” Kogut said. “The results are limited by noise in any individual detector, so scientists are looking to fly as many as 10,000 detectors on a balloon to minimize that noise.”
While a high-altitude balloon floating high above the clouds is an ideal place for missions to stare into space without disturbances from Earth’s atmosphere, it’s also a good place to be hit by cosmic rays that our atmosphere filters out, he explained. These high-energy particles spatter throughout the balloon payload’s solid structures, producing unwanted signals — noise — in the detectors.
Faster, Lighter, Less Expensive
CASBa, the Comprehensive Avionic System for Balloons, aims to replace a system originally developed in the 1980s, said Sarah Wright, suborbital technology lead at NASA Wallops. CASBa will capture, process, and transmit gigabytes of data, rather than the megabytes the current system can handle. Building it around commercially supplied computer cores also keeps mission costs down while reducing mass, Wright added.
“That is the essence of sounding rocket and balloon science,” she said. “If it’s relatively inexpensive and off-the-shelf, scientists could put more resources into developing the science package.”
CASBa will provide a variety of options and configurations for different mission needs, she said, and will work with the core Flight System operating software developed at NASA Goddard.
Once proven on a balloon flight this summer, a sounding rocket version will be tested in 2025. Additional IRAD projects seek to develop more efficient power-switching electronics and higher-data-rate transmission capabilities which, taken together, complete the computing and download capacity overhaul.
Engineer Ted Daisey leads the IRAD effort to integrate a commercially available computer the size of a credit card into their control module.
“We’re building this around a Raspberry Pi Compute Module 4, which is an industrial product intended for embedded systems,” Daisey said, “so it’s going to be very cost-effective for suborbital projects we do here at Wallops.”
Engineer Scott Hesh is developing the power switching unit to complement the Raspberry Pi CM4 computer. He described it as a modular switch that distributes the system’s power supply between up to eight different hardware systems. It uses programmable software “fuses” to protect components from overheating as well as hardware fuses to protect the power switching unit.
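The programmable “software fuse” idea can be pictured as a small monitoring loop: if a channel draws more current than its configured limit for longer than a trip delay, the switch powers that channel off, with hardware fuses still backing up the software layer. The sketch below is a generic illustration, not the Wallops avionics code; the channel names, limits, and the read_current_amps and set_channel_power functions are hypothetical placeholders.

```python
# Illustrative sketch of a programmable "software fuse". Channel names,
# limits, and hardware-access functions are hypothetical placeholders.
import time

TRIP_LIMITS = {
    "science_payload": {"max_amps": 2.0, "trip_seconds": 0.5},
    "telemetry_radio": {"max_amps": 1.5, "trip_seconds": 0.2},
}

def read_current_amps(channel):
    """Placeholder for a telemetry read from the power-switching hardware."""
    raise NotImplementedError

def set_channel_power(channel, on):
    """Placeholder for commanding the switch hardware."""
    raise NotImplementedError

def monitor(channels=TRIP_LIMITS, poll_seconds=0.05):
    over_since = {name: None for name in channels}
    while True:
        for name, limit in channels.items():
            amps = read_current_amps(name)
            if amps > limit["max_amps"]:
                over_since[name] = over_since[name] or time.monotonic()
                if time.monotonic() - over_since[name] >= limit["trip_seconds"]:
                    set_channel_power(name, on=False)   # "blow" the software fuse
            else:
                over_since[name] = None                 # reset once current recovers
        time.sleep(poll_seconds)
```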
“The avionics package takes a little less space and less mass than a current sounding rocket system,” he said. “But it’s a game changer when it comes to implementing avionics and communication. Each module measures approximately 8 by 6 inches, which is much smaller compared to our current balloon systems.”
“This whole 21st century avionics system was designed based on our Wallops philosophy of fast, agile, and cost-effective solutions for our suborbital platforms,” Hesh added.
By Karl B. Hille
NASA’s Goddard Space Flight Center, Greenbelt, Md.
To understand cognition — and its dysfunction — neuroscientists must learn its rhythms
New Post has been published on https://sunalei.org/news/to-understand-cognition-and-its-dysfunction-neuroscientists-must-learn-its-rhythms/
It could be very informative to observe the pixels on your phone under a microscope, but not if your goal is to understand what a whole video on the screen shows. Cognition is much the same kind of emergent property in the brain. It can only be understood by observing how millions of cells act in coordination, argues a trio of MIT neuroscientists. In a new article, they lay out a framework for understanding how thought arises from the coordination of neural activity driven by oscillating electric fields — also known as brain “waves” or “rhythms.”
Historically dismissed solely as byproducts of neural activity, brain rhythms are actually critical for organizing it, write Picower Professor Earl Miller and research scientists Scott Brincat and Jefferson Roy in Current Opinion in Behavioral Science. And while neuroscientists have gained tremendous knowledge from studying how individual brain cells connect and how and when they emit “spikes” to send impulses through specific circuits, there is also a need to appreciate and apply new concepts at the brain rhythm scale, which can span individual, or even multiple, brain regions.
“Spiking and anatomy are important, but there is more going on in the brain above and beyond that,” says senior author Miller, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “There’s a whole lot of functionality taking place at a higher level, especially cognition.”
The stakes of studying the brain at that scale, the authors write, might not only include understanding healthy higher-level function but also how those functions become disrupted in disease.
“Many neurological and psychiatric disorders, such as schizophrenia, epilepsy, and Parkinson’s, involve disruption of emergent properties like neural synchrony,” they write. “We anticipate that understanding how to interpret and interface with these emergent properties will be critical for developing effective treatments as well as understanding cognition.”
The emergence of thoughts
The bridge between the scale of individual neurons and the broader-scale coordination of many cells is founded on electric fields, the researchers write. Via a phenomenon called “ephaptic coupling,” the electrical field generated by the activity of a neuron can influence the voltage of neighboring neurons, creating an alignment among them. In this way, electric fields both reflect neural activity and also influence it. In a paper in 2022, Miller and colleagues showed via experiments and computational modeling that the information encoded in the electric fields generated by ensembles of neurons can be read out more reliably than the information encoded by the spikes of individual cells. In 2023 Miller’s lab provided evidence that rhythmic electrical fields may coordinate memories between regions.
At this larger scale, in which rhythmic electric fields carry information between brain regions, Miller’s lab has published numerous studies showing that lower-frequency rhythms in the so-called “beta” band originate in deeper layers of the brain’s cortex and appear to regulate the power of faster-frequency “gamma” rhythms in more superficial layers. By recording neural activity in the brains of animals engaged in working memory games, the lab has shown that beta rhythms carry “top-down” signals to control when and where gamma rhythms can encode sensory information, such as the images that the animals need to remember in the game.
Some of the lab’s latest evidence suggests that beta rhythms apply this control of cognitive processes to physical patches of the cortex, essentially acting like stencils that pattern where and when gamma can encode sensory information into memory, or retrieve it. According to this theory, which Miller calls “Spatial Computing,” beta can thereby establish the general rules of a task (for instance, the back-and-forth turns required to open a combination lock), even as the specific information content may change (for instance, new numbers when the combination changes). More generally, this structure also enables neurons to flexibly encode more than one kind of information at a time, the authors write, a widely observed neural property called “mixed selectivity.” For instance, a neuron encoding a number of the lock combination can also be assigned, based on which beta-stenciled patch it is in, the particular step of the unlocking process that the number matters for.
In the new study, Miller, Brincat, and Roy suggest another advantage consistent with cognitive control being based on an interplay of large-scale coordinated rhythmic activity: “subspace coding.” This idea postulates that brain rhythms organize the otherwise massive number of possible outcomes that could result from, say, 1,000 neurons engaging in independent spiking activity. Instead of all the many combinatorial possibilities, many fewer “subspaces” of activity actually arise, because neurons are coordinated, rather than independent. It is as if the spiking of neurons is like a flock of birds coordinating their movements. Different phases and frequencies of brain rhythms provide this coordination, aligned to amplify each other, or offset to prevent interference. For instance, if a piece of sensory information needs to be remembered, neural activity representing it can be protected from interference when new sensory information is perceived.
“Thus the organization of neural responses into subspaces can both segregate and integrate information,” the authors write.
The power of brain rhythms to coordinate and organize information processing in the brain is what enables functional cognition to emerge at that scale, the authors write. Understanding cognition in the brain, therefore, requires studying rhythms.
“Studying individual neural components in isolation — individual neurons and synapses — has made enormous contributions to our understanding of the brain and remains important,” the authors conclude. “However, it’s becoming increasingly clear that, to fully capture the brain’s complexity, those components must be analyzed in concert to identify, study, and relate their emergent properties.”
Science communication competition brings research into the real world
New Post has been published on https://sunalei.org/news/science-communication-competition-brings-research-into-the-real-world/
Laurence Willemet remembers countless family dinners where curious faces turned to her with shades of the same question: “What is it, exactly, that you do with robots?”
It’s a familiar scenario for MIT students exploring topics outside of their family’s scope of knowledge — distilling complex concepts without slides or jargon, plumbing the depths with nothing but lay terms. “It was during these moments,” Willemet says, “that I realized the importance of clear communication and the power of storytelling.”
Participating in the MIT Research Slam, then, felt like one of her family dinners.
The finalists in the 2024 MIT Research Slam competition met head-to-head on Wednesday, April 17 at a live, in-person showcase event. Four PhD candidates and four postdoc finalists demonstrated their topic mastery and storytelling skills by conveying complex ideas in only 180 seconds to an educated audience unfamiliar with the field or project at hand.
The Research Slam follows the format of the 3-Minute Thesis competition, which takes place annually at over 200 universities around the world. Both an exciting competition and a rigorous professional development exercise, the event serves as an opportunity to learn for everyone involved.
One of this year’s competitors, Bhavish Dinakar, explains it this way: “Participating in the Research Slam was a fantastic opportunity to bring my research from the lab into the real world. In addition to being a helpful exercise in public speaking and communication, the three-minute time limit forces us to learn the art of distilling years of detailed experiments into a digestible story that non-experts can understand.”
Leading up to the event, participants joined training workshops on pitch content and delivery, and had the opportunity to work one-on-one with educators from the Writing and Communication Center, English Language Studies, Career Advising and Professional Development, and the Engineering Communication Labs, all of which co-sponsored and co-produced the event. This interdepartmental team offered support for the full arc of the competition, from early story development to one-on-one practice sessions.
The showcase was jovially emceed by Eric Grunwald, director of English language learning. He shared his thoughts on the night: “I was thrilled with the enthusiasm and skill shown by all the presenters in sharing their work in this context. I was also delighted by the crowd’s enthusiasm and their many insightful questions. All in all, another very successful slam.”
A panel of accomplished judges with distinct perspectives on research communication gave feedback after each of the talks: Deborah Blum, director of the Knight Science Journalism Program at MIT; Denzil Streete, senior associate dean and director of graduate education; and Emma Yee, scientific editor at the journal Cell.
Deborah Blum aptly summed up her experience: “It was a pleasure as a science journalist to be a judge and to listen to this smart group of MIT grad students and postdocs explain their research with such style, humor, and intelligence. It was a reminder of the importance the university places on the value of scientists who communicate. And this matters. We need more scientists who can explain their work clearly, explain science to the public, and help us build a science-literate world.”
After all the talks, the judges provided constructive and substantive feedback for the contestants. It was a close competition, but in the end, Bhavish Dinakar was the judges’ choice for first place, and the audience agreed, awarding him the Audience Choice award. Omar Rutledge’s strong performance earned him the runner-up position. Among the postdoc competitors, Laurence Willemet won first place and Audience Choice, with Most Kaniz Moriam earning the runner-up award.
Postdoc Most Kaniz Moriam noted that she felt privileged to participate in the showcase. “This experience has enhanced my ability to communicate research effectively and boosted my confidence in sharing my work with a broader audience. I am eager to apply the lessons learned from this enriching experience to future endeavors and continue contributing to MIT’s dynamic research community. The MIT Research Slam Showcase wasn’t just about winning; it was about the thrill of sharing knowledge and inspiring others. Special thanks to Chris Featherman and Elena Kallestinova from the MIT Communication Lab for their guidance in practical communication skills. ”
Double winner Laurence Willemet related the competition to experiences in her daily life. Her interest in the Research Slam was rooted in countless family dinners filled with curiosity. “‘What is it exactly that you do with robots?’ they would ask, prompting me to unravel the complexities of my research in layman’s terms. Each time, I found myself grappling with the task of distilling intricate concepts into digestible nuggets of information, relying solely on words to convey the depth of my work. It was during these moments, stripped of slides and scientific jargon, that I realized the importance of clear communication and the power of storytelling. And so, when the opportunity arose to participate in the Research Slam, it felt akin to one of those family dinners for me.”
The first place finishers received a $600 cash prize, while the runners-up and audience choice winners each received $300.
Last year’s winner in the PhD category, Neha Bokil, a PhD candidate in biology working on her dissertation in the lab of David Page, is set to represent MIT at the Three Minute Thesis Northeast Regional Competition later this month, which is organized by the Northeastern Association of Graduate Schools.
A full list of slam finalists and the titles of their talks is below.
 PhD Contestants: 
Pradeep Natarajan, Chemical Engineering (ChemE), “What can coffee-brewing teach us about brain disease?”
Omar Rutledge, Brain and Cognitive Sciences, “Investigating the effects of cannabidiol (CBD) on social anxiety disorder”
Bhavish Dinakar, ChemE, “A boost from batteries: making chemical reactions faster”
Sydney Dolan, Aeronautics and Astronautics, “Creating traffic signals for space”
 Postdocs: 
Augusto Gandia, Architecture and Planning, “Cyber modeling — computational morphogenesis via ‘smart’ models”
Laurence Willemet, Computer Science and Artificial Intelligence Laboratory, “Remote touch for teleoperation”
Most Kaniz Moriam, Mechanical Engineering, “Improving recyclability of cellulose-based textile wastes”
Mohammed Aatif Shahab, ChemE, “Eye-based human engineering for enhanced industrial safety” 
Research Slam organizers included Diana Chien, director of MIT School of Engineering Communication Lab; Elena Kallestinova, director of MIT Writing and Communication Center; Alexis Boyer, assistant director, Graduate Career Services, Career Advising and Professional Development (CAPD); Amanda Cornwall, associate director, Graduate Student Professional Development, CAPD; and Eric Grunwald, director of English Language Studies. This event was sponsored by the Office of Graduate Education, the Office of Postdoctoral Services, the Writing and Communication Center, MIT Career Advising and Professional Development, English Language Studies, and the MIT School of Engineering Communication Labs.
Tech Today: Stay Safe with Battery Testing for Space - NASA
New Post has been published on https://sunalei.org/news/tech-today-stay-safe-with-battery-testing-for-space-nasa/
NASA battery safety exams influence commercial product testing
Battery safety is of paramount importance in space, where the risk of thermal runaway looms large. This dangerous reaction, characterized by a continuous escalation of temperatures within the battery, can potentially lead to a fire or explosion.
For two decades, Judy Jeevarajan was the NASA engineer in charge of testing. Thanks to that experience, batteries for everything from industrial equipment to home appliances are tested using methods she originally developed for spaceflight.
Jeevarajan began working at NASA’s Johnson Space Center in Houston in the 1990s, developing advanced battery testing technologies, eventually becoming responsible for approving all batteries flown for human spaceflight. In 1999, shuttle astronauts wanted to bring a digital camcorder aboard. Previous video cameras on the space shuttle used battery chemistries already authorized for space, but the emerging use of lithium-ion cells was new territory for space missions.
To test these batteries, her team used a hydraulic press to assess their tolerance to internal short circuits, and devised a vibration test to ensure the intense shaking at launch wouldn’t lead to failure. After the camcorder’s lithium-ion batteries were approved to fly, her work expanded to testing batteries for every consumer-grade device brought aboard the International Space Station.
For more than 100 years, Underwriters Laboratories Inc. (UL) of Northbrook, Illinois, has developed standards and testing modes for all modern appliances and technologies, ensuring everything is as safe as possible. After Jeevarajan met engineers from UL at a battery safety conference, she became a member of the UL Standards Technical Panel for battery safety. Over the next decade, she helped verify the workings of a new battery-testing machine and used her NASA experience as UL further developed and promoted the adoption of new testing methods.
Jeevarajan joined UL’s nonprofit arm full-time in 2015, bringing with her decades of experience gained working at Johnson, including her techniques for inducing thermal runaway. These are now part of a UL-defined test method for testing cells in large lithium-ion battery systems, like those found in batteries for storing power on the electrical grid.
MIT faculty, instructors, students experiment with generative AI in teaching and learning
New Post has been published on https://sunalei.org/news/mit-faculty-instructors-students-experiment-with-generative-ai-in-teaching-and-learning/
How can MIT’s community leverage generative AI to support learning and work on campus and beyond?
At MIT’s Festival of Learning 2024, faculty and instructors, students, staff, and alumni exchanged perspectives about the digital tools and innovations they’re experimenting with in the classroom. Panelists agreed that generative AI should be used to scaffold — not replace — learning experiences.
This annual event, co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, celebrates teaching and learning innovations. When introducing new teaching and learning technologies, panelists stressed the importance of iteration and teaching students how to develop critical thinking skills while leveraging technologies like generative AI.
“The Festival of Learning brings the MIT community together to explore and celebrate what we do every day in the classroom,” said Christopher Capozzola, senior associate dean for open learning. “This year’s deep dive into generative AI was reflective and practical — yet another remarkable instance of ‘mind and hand’ here at the Institute.”  
Video: 2024 Festival of Learning: Highlights
Incorporating generative AI into learning experiences 
MIT faculty and instructors aren’t just willing to experiment with generative AI — some believe it’s a necessary tool to prepare students to be competitive in the workforce. “In a future state, we will know how to teach skills with generative AI, but we need to be making iterative steps to get there instead of waiting around,” said Melissa Webster, lecturer in managerial communication at MIT Sloan School of Management. 
Some educators are revisiting their courses’ learning goals and redesigning assignments so students can achieve the desired outcomes in a world with AI. Webster, for example, previously paired written and oral assignments so students would develop ways of thinking. But, she saw an opportunity for teaching experimentation with generative AI. If students are using tools such as ChatGPT to help produce writing, Webster asked, “how do we still get the thinking part in there?”
One of the new assignments Webster developed asked students to generate cover letters through ChatGPT and critique the results from the perspective of future hiring managers. Beyond learning how to refine generative AI prompts to produce better outputs, Webster shared that “students are thinking more about their thinking.” Reviewing their ChatGPT-generated cover letter helped students determine what to say and how to say it, supporting their development of higher-level strategic skills like persuasion and understanding audiences.
Takako Aikawa, senior lecturer at the MIT Global Studies and Languages Section, redesigned a vocabulary exercise to ensure students developed a deeper understanding of the Japanese language, rather than just right or wrong answers. Students compared short sentences written by themselves and by ChatGPT and developed broader vocabulary and grammar patterns beyond the textbook. “This type of activity enhances not only their linguistic skills but stimulates their metacognitive or analytical thinking,” said Aikawa. “They have to think in Japanese for these exercises.”
While these panelists and other Institute faculty and instructors are redesigning their assignments, many MIT undergraduate and graduate students across different academic departments are leveraging generative AI for efficiency: creating presentations, summarizing notes, and quickly retrieving specific ideas from long documents. But this technology can also creatively personalize learning experiences. Its ability to communicate information in different ways allows students with different backgrounds and abilities to adapt course material in a way that’s specific to their particular context. 
Generative AI, for example, can help with student-centered learning at the K-12 level. Joe Diaz, program manager and STEAM educator for MIT pK-12 at Open Learning, encouraged educators to foster learning experiences where the student can take ownership. “Take something that kids care about and they’re passionate about, and they can discern where [generative AI] might not be correct or trustworthy,” said Diaz.
Panelists encouraged educators to think about generative AI in ways that move beyond a course policy statement. When incorporating generative AI into assignments, the key is to be clear about learning goals and open to sharing examples of how generative AI could be used in ways that align with those goals. 
The importance of critical thinking
Although generative AI can have positive impacts on educational experiences, users need to understand why large language models might produce incorrect or biased results. Faculty, instructors, and student panelists emphasized that it’s critical to contextualize how generative AI works. “[Instructors] try to explain what goes on in the back end and that really does help my understanding when reading the answers that I’m getting from ChatGPT or Copilot,” said Joyce Yuan, a senior in computer science. 
Jesse Thaler, professor of physics and director of the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions, warned about trusting a probabilistic tool to give definitive answers without uncertainty bands. “The interface and the output needs to be of a form that there are these pieces that you can verify or things that you can cross-check,” Thaler said.
When introducing tools like calculators or generative AI, the faculty and instructors on the panel said it’s essential for students to develop critical thinking skills in those particular academic and professional contexts. Computer science courses, for example, could permit students to use ChatGPT for help with their homework if the problem sets are broad enough that generative AI tools wouldn’t capture the full answer. However, introductory students who haven’t developed the understanding of programming concepts need to be able to discern whether the information ChatGPT generated was accurate or not.
Ana Bell, senior lecturer of the Department of Electrical Engineering and Computer Science and MITx digital learning scientist, dedicated one class toward the end of the semester of Course 6.100L (Introduction to Computer Science and Programming Using Python) to teach students how to use ChatGPT for programming questions. She wanted students to understand why setting up generative AI tools with the context for programming problems, inputting as many details as possible, will help achieve the best possible results. “Even after it gives you a response back, you have to be critical about that response,” said Bell. By waiting to introduce ChatGPT until this stage, students were able to look at generative AI’s answers critically because they had spent the semester developing the skills to be able to identify whether problem sets were incorrect or might not work for every case. 
A scaffold for learning experiences
The bottom line from the panelists during the Festival of Learning was that generative AI should provide scaffolding for engaging learning experiences where students can still achieve desired learning goals. The MIT undergraduate and graduate student panelists found it invaluable when educators set expectations for the course about when and how it’s appropriate to use AI tools. Informing students of the learning goals allows them to understand whether generative AI will help or hinder their learning. Student panelists asked for trust that they would use generative AI as a starting point, or treat it like a brainstorming session with a friend for a group project. Faculty and instructor panelists said they will continue iterating their lesson plans to best support student learning and critical thinking. 
Panelists from both sides of the classroom discussed the importance of generative AI users being responsible for the content they produce and avoiding automation bias — trusting the technology’s response implicitly without thinking critically about why it produced that answer and whether it’s accurate. But since generative AI is built by people making design decisions, Thaler told students, “You have power to change the behavior of those tools.”
An AI dataset carves new paths to tornado detection
New Post has been published on https://sunalei.org/news/an-ai-dataset-carves-new-paths-to-tornado-detection/
The return of spring in the Northern Hemisphere touches off tornado season. A tornado’s twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured to radar, the tool of meteorologists. It’s hard to know exactly when a tornado has formed, or even why.
A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. In the data, storms that spawned tornadoes sit alongside other severe storms, some with nearly identical conditions, that never produced one. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature’s most mysterious and violent phenomena.
“A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project’s co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 
Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning’s ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 
Swirling uncertainty
About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  
Yet tornadoes are notoriously difficult to forecast because scientists don’t have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won’t. We don’t fully understand it,” Kurdzo says.
A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air, and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As the radar beam with a given tilt angle travels farther from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm’s broad, rotating updraft. A mesocyclone doesn’t always produce a tornado.
With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  
In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.
The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).
Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
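One convenient way to hand such a sample to a machine learning model is as a multi-channel image tensor. The sketch below is an assumption about a plausible in-memory layout, not the actual TorNet file format or API, and the image dimensions are made up for illustration.

```python
# Illustrative only: one plausible layout for a TorNet-style sample, not the
# dataset's actual schema. Two sweep angles x six radar products, each an
# H x W image, plus a binary tornado label.
import numpy as np

N_SWEEPS, N_PRODUCTS, H, W = 2, 6, 120, 240   # image size is hypothetical

def make_example_sample(tornadic: bool):
    fields = np.zeros((N_SWEEPS, N_PRODUCTS, H, W), dtype=np.float32)
    # Product index 0 might hold reflectivity, index 1 radial velocity, etc.
    label = np.float32(1.0 if tornadic else 0.0)
    return fields, label

# A CNN-style classifier could consume the sample by flattening the sweep
# and product axes into 12 input channels of shape (H, W).
fields, label = make_example_sample(tornadic=True)
channels = fields.reshape(N_SWEEPS * N_PRODUCTS, H, W)
print(channels.shape, label)   # (12, 120, 240) 1.0
```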
A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.
“What’s beautiful about a true benchmark dataset is that we’re all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”
Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.
“This dataset also means that a grad student doesn’t have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.
This project was funded by Lincoln Laboratory’s Climate Change Initiative, which aims to leverage the laboratory’s diverse technical strengths to help address climate problems threatening human health and global security.
Chasing answers with deep learning
Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 
“We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren’t searched for by forecasters,” Veillette says.
The results are promising. Their deep learning model performed similarly to or better than all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which make up the most devastating and costly occurrences of these storms.
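For readers who want a feel for what an image-based detector of this general kind looks like, the sketch below is a deliberately tiny stand-in, not the Lincoln Laboratory baseline or any model from the paper; the 12 input channels simply follow the hypothetical sample layout sketched earlier.

```python
# Illustrative convolutional classifier for multi-channel radar images.
# This is a toy stand-in, not the TorNet baseline models.
import torch
import torch.nn as nn

class TinyTornadoNet(nn.Module):
    def __init__(self, in_channels=12):          # e.g., 2 sweeps x 6 products
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)        # one logit: tornado vs. not

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyTornadoNet()
dummy_batch = torch.zeros(4, 12, 120, 240)        # hypothetical image size
print(model(dummy_batch).shape)                   # torch.Size([4, 1])
```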
They also evaluated two other types of machine-learning models, and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.
“The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”
TorNet could be useful in the weather community for other uses too, such as for conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.
Taking steps toward operations
On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.
“As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don’t know about?” he asks.
Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to provide its reasoning, in a format understandable to humans, of why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes. This knowledge could help train forecasters, and models, to recognize the signs sooner. 
“None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters’ eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.
Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Data refresh rates in a next-generation radar network are expected to increase from every five minutes to approximately one minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited for monitoring radar returns in real time, alongside humans. Tornadoes can form and disappear in minutes.
But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It’s a first step.”
The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they’ll eventually be shown to forecasters, to start a process of transitioning into operations.
In the end, the path could circle back to trust.
“We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”
Exploring the history of data-driven arguments in public life
New Post has been published on https://sunalei.org/news/exploring-the-history-of-data-driven-arguments-in-public-life/
Political debates today may not always be exceptionally rational, but they are often infused with numbers. If people are discussing the economy or health care or climate change, sooner or later they will invoke statistics.
It was not always thus. Our habit of using numbers to make political arguments has a history, and William Deringer is a leading historian of it. Indeed, in recent years Deringer, an associate professor in MIT’s Program in Science, Technology, and Society (STS), has carved out a distinctive niche through his scholarship showing how quantitative reasoning has become part of public life.
In his prize-winning 2018 book “Calculated Values” (Harvard University Press), Deringer identified a time in British public life from the 1680s to the 1720s as a key moment when the practice of making numerical arguments took hold — a trend deeply connected with the rise of parliamentary power and political parties. Crucially, freedom of the press also expanded, allowing greater scope for politicians and the public to have frank discussions about the world as it was, backed by empirical evidence.
Deringer’s second book project, in progress and under contract to Yale University Press, digs further into a concept from the first book — the idea of financial discounting. This is a calculation to estimate what money (or other things) in the future is worth today, to assign those future objects a “present value.” Some skilled mathematicians understood discounting in medieval times; its use expanded in the 1600s; today it is very common in finance and is the subject of debate in relation to climate change, as experts try to estimate ideal spending levels on climate matters.
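The arithmetic at the core of discounting is compact enough to state directly: a future amount is divided by a compound-interest growth factor to get its present value. The short sketch below uses an arbitrary 3 percent rate chosen purely for illustration.

```python
# Present value under compound-interest discounting: PV = FV / (1 + r) ** t
# The 3 percent rate is arbitrary, chosen only for illustration.

def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

print(round(present_value(100, 0.03, 10), 2))    # ~74.41: $100 a decade out
print(round(present_value(100, 0.03, 100), 2))   # ~5.2: $100 a century out
```

The century-scale line hints at why the discount rate is so contested in climate policy: small changes in the rate swing the present value of far-future costs and benefits enormously.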
“The book is about how this particular technique came to have the power to weigh in on profound social questions,” Deringer says. “It’s basically about compound interest, and it’s at the center of the most important global question we have to confront.”
Numbers alone do not make a debate rational or informative; they can be false, misleading, used to entrench interests, and so on. Indeed, a key theme in Deringer’s work is that when quantitative reasoning gains more ground, the question is why, and to whose benefit. In this sense his work aligns with the long-running and always-relevant approach of the Institute’s STS faculty, in thinking carefully about how technology and knowledge are applied to the world.
“The broader culture has become more attuned to STS, whether it’s conversations about AI or algorithmic fairness or climate change or energy, these are simultaneously technical and social issues,” Deringer says. “Teaching undergraduates, I’ve found the awareness of that at MIT has only increased.” For both his research and teaching, Deringer received tenure from MIT earlier this year.
Dig in, work outward
Deringer has been focused on these topics since he was an undergraduate at Harvard University.
“I found myself becoming really interested in the history of economics, the history of practical mathematics, data, statistics, and how it came to be that so much of our world is organized quantitatively,” he says.
Deringer wrote a college thesis about how England measured the land it was seizing from Ireland in the 1600s, and then, after graduating, went to work in the finance sector, which gave him a further chance to think about the application of quantification to modern life.
“That was not what I wanted to do forever, but for some of the conceptual questions I was interested in, the societal life of calculations, I found it to be a really interesting space,” Deringer says.
He returned to academia by pursuing his PhD in the history of science at Princeton University. There, in his first year of graduate school, in the archives, Deringer found 18th-century pamphlets about financial calculations concerning the value of stock involved in the infamous episode of speculation known as the South Sea Bubble. That became part of his dissertation; skeptics of the South Sea Bubble were among the prominent early voices bringing data into public debates. It has also helped inform his second book.
First, though, Deringer earned his doctorate from Princeton in 2012, then spent three years as a Mellon Postdoctoral Research Fellow at Columbia University. He joined the MIT faculty in 2015. At the Institute, he finished turning his dissertation into the “Calculated Values” book — which won the 2019 Oscar Kenshur Prize for the best book from the Center for Eighteenth-Century Studies at Indiana University, and was co-winner of the 2021 Joseph J. Spengler Prize for best book from the History of Economics Society.
“My method as a scholar is to dig into the technical details, then work outward historically from them,” Deringer says.
A long historical chain
Even as Deringer was writing his first book, the idea for the second one was taking root in his mind. The South Sea Bubble pamphlets he had found at Princeton incorporated discounting, which appears intermittently in “Calculated Values.” Deringer was intrigued by how adept 18th-century figures were at the technique.
“Something that I thought of as a very modern technique seemed to be really well-known by a lot of people in the 1720s,” he says.
At the same time, a conversation with an academic colleague in philosophy made clear to Deringer how contested discounting had become in climate change policy. He soon resolved to write a “biography of a calculation” about financial discounting.
“I knew my next book had to be about this,” Deringer says. “I was very interested in the deep historical roots of discounting, and it has a lot of present urgency.”
Deringer says the book will incorporate material about the financing of English cathedrals, the heavy use of discounting in the mining industry during the Industrial Revolution, a revival of discounting in 1960s policy circles, and climate change, among other things. In each case, he is carefully looking at the interests and historical dynamics behind the use of discounting.
“For people who use discounting regularly, it’s like gravity: It’s very obvious that to be rational is to discount the future according to this formula,” Deringer says. “But if you look at history, what is thought of as rational is part of a very long historical chain of people applying this calculation in various ways, and over time that’s just how things are done. I’m really interested in pulling apart that idea that this is a sort of timeless rational calculation, as opposed to a product of this interesting history.”
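To see why the choice of rate is so consequential over long horizons, and thus why it becomes a policy fight rather than a neutral technicality, the hedged sketch below reuses the same present-value formula for a benefit assumed to arrive a century from now. The benefit size and the candidate rates are illustrative assumptions only.

```python
def present_value(future_amount: float, annual_rate: float, years: float) -> float:
    """Standard compound-interest discounting."""
    return future_amount / (1 + annual_rate) ** years


# Illustrative assumption: a $1 trillion benefit realized 100 years from now.
future_benefit = 1e12
horizon_years = 100

for rate in (0.01, 0.03, 0.07):
    pv = present_value(future_benefit, rate, horizon_years)
    print(f"At {rate:.0%} per year: about ${pv:,.0f} today")
```

Under these placeholder numbers, the same future benefit is worth roughly $370 billion today at a 1 percent rate but only about $1.2 billion at 7 percent, which is why the rate itself becomes the contested variable in climate policy.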
Working in STS, Deringer notes, has helped encourage him to link together numerous historical time periods into one book about the numerous ways discounting has been used.
“I’m not sure that pursuing a book that stretches from the 17th century to the 21st century is something I would have done in other contexts,” Deringer says. He is also quick to credit his colleagues in STS and in other programs for helping create the scholarly environment in which he is thriving.
“I came in with a really amazing cohort of other scholars in SHASS,” Deringer notes, referring to the MIT School of Humanities, Arts, and Social Sciences. He cites others receiving tenure in the last year such as his STS colleague Robin Scheffler, historian Megan Black, and historian Caley Horan, with whom Deringer has taught graduate classes on the concept of risk in history. In all, Deringer says, the Institute has been an excellent place for him to pursue interdisciplinary work on technical thought in history.
“I work on very old things and very technical things,” Deringer says. “But I’ve found a wonderful welcoming at MIT from people in different fields who light up when they hear what I’m interested in.”
sunaleisocial · 10 days
Text
NASA’s ORCA, AirHARP Projects Paved Way for PACE to Reach Space - NASA
New Post has been published on https://sunalei.org/news/nasas-orca-airharp-projects-paved-way-for-pace-to-reach-space-nasa/
NASA’s ORCA, AirHARP Projects Paved Way for PACE to Reach Space - NASA
Tumblr media
It took the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission just 13 minutes to reach low-Earth orbit from Cape Canaveral Space Force Station in February 2024. It took a network of scientists at NASA and research institutions around the world more than 20 years to carefully craft and test the novel instruments that allow PACE to study the ocean and atmosphere with unprecedented clarity.
In the early 2000s, a team of scientists at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, prototyped the Ocean Radiometer for Carbon Assessment (ORCA) instrument, which ultimately became PACE’s primary research tool: the Ocean Color instrument (OCI). Then, in the 2010s, a team from the University of Maryland, Baltimore County (UMBC), worked with NASA to prototype the Hyper Angular Rainbow Polarimeter (HARP), a shoebox-sized instrument that will collect groundbreaking measurements of atmospheric aerosols.
Neither PACE’s OCI nor HARP2 — a nearly exact copy of the HARP prototype — would exist were it not for NASA’s early investments in novel technologies for Earth observation through competitive grants distributed by the agency’s Earth Science Technology Office (ESTO). Over the last 25 years, ESTO has managed the development of more than 1,100 new technologies for gathering science measurements.
“All of this investment in the tech development early on basically made it much, much easier for us to build the observatory into what it is today,” said Jeremy Werdell, an oceanographer at NASA Goddard and project scientist for PACE.
Charles “Chuck” McClain, who led the ORCA research team until his retirement in 2013, said NASA’s commitment to technology development is a cornerstone of PACE’s success. “Without ESTO, it wouldn’t have happened. It was a long and winding road, getting to where we are today.”
It was ORCA that first demonstrated that a telescope rotating at six revolutions per second could synchronize perfectly with an array of charge-coupled devices — microchips that transform telescopic projections into digital images. This innovation made it possible for OCI to observe hyperspectral shades of ocean color previously unobtainable with space-based sensors.
But what made ORCA especially appealing to PACE was its pedigree of thorough testing. “One really important consideration was technology readiness,” said Gerhard Meister, who took over ORCA after McClain retired and serves as OCI instrument scientist. Compared to other ocean radiometer designs that were considered for PACE, “we had this instrument that was ready, and we had shown that it would work.”
Technology readiness also made HARP an appealing solution to PACE’s polarimeter challenge. Mission engineers needed an instrument powerful enough to ensure PACE’s ocean color measurements weren’t jeopardized by atmospheric interference, but compact enough to fly on the PACE observatory platform.
By the time Vanderlei Martins, an atmospheric scientist at UMBC, first spoke to Werdell about incorporating a version of HARP into PACE in 2016, he had proven the technology with AirHARP, an airplane-mounted version of HARP, and was using an ESTO award to prepare HARP CubeSat for space.
HARP2 relies on the same optical system developed through AirHARP and HARP CubeSat. A wide-angle lens observes Earth’s surface from up to 60 different viewing angles with a spatial resolution of 1.62 miles (2.6 kilometers) per pixel, all without any moving parts. This gives researchers a global view of aerosols from a tiny instrument that consumes very little energy.
Were it not for NASA’s early support of AirHARP and HARP CubeSat, said Martins, “I don’t think we would have HARP2 today.” He added: “We achieved every single goal, every single element, and that was because ESTO stayed with us.”
That support continues making a difference to researchers like Jessie Turner, an oceanographer at the University of Connecticut who will use PACE to study algal blooms and water clarity in the Chesapeake Bay.
“For my application that I’m building for early adopters of PACE data, I actually think that polarimeters are going to be really useful because that’s something we haven’t fully done before for the ocean,” Turner said. “Polarimetric data can actually help us see what kind of particles are in the water.”
Without the early development and test-drives of the instruments from McClain’s and Martins’ teams, PACE as we know it wouldn’t exist.
“It all kind of fell in place in a timely manner that allowed us to mature the instruments, along with the science, just in time for PACE,” said McClain.
To explore current opportunities to collaborate with NASA on new technologies for studying Earth, visit ESTO’s open solicitations page.
By Gage Taylor, NASA’s Goddard Space Flight Center, Greenbelt, Md.