A significant MIT investment in advanced manufacturing innovation

These are not your grandmother’s fibers and textiles. These are tomorrow’s functional fabrics — designed and prototyped in Cambridge, Massachusetts, and manufactured across a network of U.S. partners. This is the vision of the new headquarters for the Manufacturing USA institute called Advanced Functional Fabrics of America (AFFOA) that opened Monday at 12 Emily Street, steps away from the MIT campus.
AFFOA headquarters represents a significant MIT investment in advanced manufacturing innovation. This facility includes a Fabric Discovery Center that provides end-to-end prototyping from fiber design to system integration of new textile-based products, and will be used for education and workforce development in the Cambridge and greater Boston community. AFFOA headquarters also includes startup incubation space for companies spun out from MIT and other partners who are innovating advanced fabrics and fibers for applications ranging from apparel and consumer electronics to automotive and medical devices.
MIT was a founding member of the AFFOA team that partnered with the Department of Defense in April 2016 to launch this new institute as a public-private partnership through an independent nonprofit also founded by MIT. AFFOA’s chief executive officer is Yoel Fink, who led the AFFOA proposal last year as professor of materials science and engineering and director of the Research Laboratory of Electronics at MIT. His vision was to create a “fabric revolution” grounded in new fiber materials and textile manufacturing processes for fabrics that see, hear, sense, communicate, store and convert energy, and monitor health.
From the perspectives of research, education, and entrepreneurship, MIT engagement in AFFOA draws from many strengths. These include the multifunctional drawn fibers developed by Fink and others, which combine multiple materials within a single fiber so that the fiber itself functions as a device. That MIT-developed fiber concept has been applied to key challenges in the defense sector through MIT’s Institute for Soldier Nanotechnologies; commercialized through a startup called OmniGuide, now OmniGuide Surgical, which makes laser surgery devices; and extended to several new areas, including neural probes developed by Polina Anikeeva, MIT associate professor of materials science and engineering. Beyond these diverse uses of fiber devices, MIT faculty including Greg Rutledge, the Lammot du Pont Professor of Chemical Engineering, have also led innovation in predictive modeling and design of polymer nanofibers, fiber processing and characterization, and self-assembly of woven and nonwoven filters and textiles for diverse applications and industries.

New system enables speedy analysis of laparoscopic procedures

Laparoscopy is a surgical technique in which a fiber-optic camera is inserted into a patient’s abdominal cavity to provide a video feed that guides the surgeon through a minimally invasive procedure.
Laparoscopic surgeries can take hours, and the video generated by the camera — the laparoscope — is often recorded. Those recordings contain a wealth of information that could be useful for training both medical providers and computer systems that would aid with surgery, but because reviewing them is so time consuming, they mostly sit idle.
Researchers at MIT and Massachusetts General Hospital hope to change that, with a new system that can efficiently search through hundreds of hours of video for events and visual features that correspond to a few training examples.
In work they presented at the International Conference on Robotics and Automation this month, the researchers trained their system to recognize different stages of an operation, such as biopsy, tissue removal, stapling, and wound cleansing.
But the system could be applied to any analytical question that doctors deem worthwhile. It could, for instance, be trained to predict when particular medical instruments — such as additional staple cartridges — should be prepared for the surgeon’s use, or it could sound an alert if a surgeon encounters rare, aberrant anatomy.
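One way to picture how a system can label video from only a few training examples is nearest-neighbor matching of frame features. The sketch below is a deliberately simplified stand-in, not the researchers' actual method: the color-histogram features, function names, and phase labels are all our own illustration.

```python
import numpy as np

def histogram_feature(frame, bins=8):
    """Reduce a frame (H x W x 3 array of 0-255 values) to a normalized color histogram."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def classify_frames(frames, examples):
    """Assign each frame the phase label of its nearest labeled example frame.

    `examples` is a list of (label, frame) pairs -- e.g. a handful of
    hand-annotated frames for phases such as "stapling" or "cleansing".
    """
    labels, feats = zip(*[(lab, histogram_feature(f)) for lab, f in examples])
    feats = np.stack(feats)
    out = []
    for frame in frames:
        f = histogram_feature(frame)
        # Pick the label of the training frame with the closest feature vector.
        out.append(labels[int(np.argmin(np.linalg.norm(feats - f, axis=1)))])
    return out
```

A real system would use richer visual features and temporal models, but the core payoff is the same: once a few examples per phase are labeled, every remaining frame can be indexed automatically.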
“Surgeons are thrilled by all the features that our work enables,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and senior author on the paper. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real time.”
Joining Rus on the paper are first author Mikhail Volkov, who was a postdoc in Rus’ group when the work was done and is now a quantitative analyst at SMBC Nikko Securities in Tokyo; Guy Rosman, another postdoc in Rus’ group; and Daniel Hashimoto and Ozanan Meireles of Massachusetts General Hospital (MGH).

New cache system increases processing speed while reducing energy consumption

For decades, computer chips have increased efficiency by using “caches,” small, local memory banks that store frequently used data and cut down on time- and energy-consuming communication with off-chip memory.
Today’s chips generally have three or even four different levels of cache, each of which is more capacious but slower than the last. The sizes of the caches represent a compromise between the needs of different kinds of programs, but it’s rare that they’re exactly suited to any one program.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed a system that reallocates cache access on the fly, to create new “cache hierarchies” tailored to the needs of particular programs.
The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that, compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent.
“What you would like is to take these distributed physical memory resources and build application-specific hierarchies that maximize the performance for your particular application,” says Daniel Sanchez, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS), whose group developed the new system.
“And that depends on many things in the application. What’s the size of the data it accesses? Does it have hierarchical reuse, so that it would benefit from a hierarchy of progressively larger memories? Or is it scanning through a data structure, so we’d be better off having a single but very large level? How often does it access data? How much would its performance suffer if we just let data drop to main memory? There are all these different tradeoffs.”

Harnessing the extremely high resolution of 3-D printers

Today’s 3-D printers have a resolution of 600 dots per inch, which means that they could pack a billion tiny cubes of different materials into a cube that measures just 1.67 inches on a side.
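The dot-pitch arithmetic checks out quickly: at 600 dots per inch, a cubic inch holds 600³ (about 216 million) voxels, so a billion voxels occupy roughly 4.6 cubic inches, which is a cube about 1.67 inches on a side.

```python
# Sanity-check the voxel arithmetic for a 600-dpi printer.
dpi = 600
voxels_per_cubic_inch = dpi ** 3              # 216,000,000 voxels per cubic inch
volume_needed = 1e9 / voxels_per_cubic_inch   # cubic inches for a billion voxels
side = volume_needed ** (1 / 3)               # edge length of the equivalent cube
print(round(volume_needed, 2), round(side, 2))  # 4.63 1.67
```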
Such precise control of printed objects’ microstructure gives designers commensurate control of the objects’ physical properties — such as their density or strength, or the way they deform when subjected to stresses. But evaluating the physical effects of every possible combination of even just two materials, for an object consisting of tens of billions of cubes, would be prohibitively time consuming.
So researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new design system that catalogues the physical properties of a huge number of tiny cube clusters. These clusters can then serve as building blocks for larger printable objects. The system thus takes advantage of physical measurements at the microscopic scale, while enabling computationally efficient evaluation of macroscopic designs.
“Conventionally, people design 3-D prints manually,” says Bo Zhu, a postdoc at CSAIL and first author on the paper. “But when you want to have some higher-level goal — for example, you want to design a chair with maximum stiffness or design some functional soft [robotic] gripper — then intuition or experience is maybe not enough. Topology optimization, which is the focus of our paper, incorporates the physics and simulation in the design loop. The problem for current topology optimization is that there is a gap between the hardware capabilities and the software. Our algorithm fills that gap.”

Zhu and his MIT colleagues presented their work this week at Siggraph, the premier graphics conference. Joining Zhu on the paper are Wojciech Matusik, an associate professor of electrical engineering and computer science; Mélina Skouras, a postdoc in Matusik’s group; and Desai Chen, a graduate student in electrical engineering and computer science.
Points in space
The MIT researchers begin by defining a space of physical properties, in which any given microstructure will assume a particular location. For instance, there are three standard measures of a material’s stiffness: One describes its deformation in the direction of an applied force, or how far it can be compressed or stretched; one describes its deformation in directions perpendicular to an applied force, or how much its sides bulge when it’s squeezed or contract when it’s stretched; and the third measures its response to shear, or a force that causes different layers of the material to shift relative to each other.
Those three measures define a three-dimensional space, and any particular combination of them defines a point in that space.
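A minimal sketch of this "points in space" idea follows. It is our own illustration under assumed names: we label the three measures E (stretch stiffness), nu (lateral bulge), and G (shear response), give each catalogued microstructure cluster a made-up property point, and pick the cluster nearest to a designer's target.

```python
import numpy as np

# Hypothetical catalog: each cluster of tiny cubes is summarized by its
# measured (E, nu, G) stiffness point. All values here are invented.
catalog = {
    "cluster_A": (1.0, 0.30, 0.38),  # stiff, moderate lateral bulge
    "cluster_B": (0.2, 0.45, 0.07),  # soft and rubber-like
    "cluster_C": (0.6, 0.10, 0.27),  # stiff with little lateral bulge
}

def nearest_microstructure(target):
    """Return the catalogued cluster whose property point is closest to `target`."""
    return min(catalog,
               key=lambda k: np.linalg.norm(np.array(catalog[k]) - target))

print(nearest_microstructure(np.array([0.55, 0.15, 0.25])))  # cluster_C
```

The real system's catalog is far larger, but the principle is the same: once microstructures are reduced to points in property space, finding a building block with desired physical behavior becomes a fast geometric lookup rather than a fresh simulation.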

Reduced power consumption could help make AI systems portable

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.
But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.
Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.
Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.
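The flavor of that approach can be sketched in a few lines: estimate a layer's energy from its arithmetic and memory-access counts, then prune weights and re-estimate. This is our own simplified illustration, not Sze's analytic method, and the per-operation energy costs below are made-up placeholders rather than measured hardware values.

```python
import numpy as np

E_MAC = 1.0  # assumed energy per multiply-accumulate (arbitrary units)
E_MEM = 6.0  # assumed energy per weight fetched from memory (arbitrary units)

def layer_energy(weights):
    """Crude energy estimate: each nonzero weight costs one MAC plus one fetch."""
    nonzero = np.count_nonzero(weights)
    return nonzero * E_MAC + nonzero * E_MEM

def prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of weights."""
    w = weights.copy()
    k = int(fraction * w.size)
    idx = np.argsort(np.abs(w), axis=None)[:k]
    w.ravel()[idx] = 0.0
    return w

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))
# Pruning 70 percent of the weights cuts the estimated energy accordingly.
print(layer_energy(w) > layer_energy(prune(w, 0.7)))  # True
```

The key point such a model captures is that memory accesses, not arithmetic, often dominate a network's energy budget, so pruning pays off most where it eliminates fetches.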
The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.