New York City's Vaccine Passport Plan Renews Online Privacy Debate
When New York City announced on Tuesday that it would soon require people to show proof of at least one coronavirus vaccine shot to enter businesses, Mayor Bill de Blasio said the system was "simple - just show it and you're in." Less simple was the privacy debate that the city reignited. Vaccine passports, which show proof of vaccination, often in electronic form such as an app, are the bedrock of Mr. de Blasio's plan. For months, these records - also known as health passes or digital health certificates - have been under discussion around the world as a tool to allow vaccinated people, who are less at risk from the virus, to gather safely. New York will be the first U.S. city to include these passes in a vaccine mandate, potentially setting off similar actions elsewhere. But the mainstreaming of these credentials could also usher in an era of increased digital surveillance, privacy researchers said. That's because vaccine passes may enable location tracking, even as there are few rules about how people's digital vaccine data should be stored and how it can be shared. While existing privacy laws limit the sharing of information among medical providers, there is no such rule for when people upload their own data onto an app.
New York City's mandate that people must show proof of at least one coronavirus vaccine shot, or vaccine passport, to enter businesses has revived the debate over whether these digital certificates undermine online privacy. The applications may enable location tracking, and privacy researchers are worried about digital surveillance escalating. The New York Civil Liberties Union's Allie Bohm said that without restrictions, presenting a digital vaccination passport whenever people enter a public place could lead to a "global map of where people are going," which could be sold or turned over to third parties, law enforcement, or government authorities. Privacy advocates are not reassured by vaccine pass developers' claims that their products uphold privacy, given that authoritarian regimes have exploited COVID-19 contact-tracing apps for surveillance or criminal investigation.
Facebook Disables Accounts Tied to NYU Research Project
Facebook Inc. has disabled the personal accounts of a group of New York University researchers studying political ads on the social network, claiming they are scraping data in violation of the company's terms of service. The company also cut off the researchers' access to Facebook's APIs, technology that is used to share data from Facebook to other apps or services, and disabled other apps and Pages associated with the research project, according to Mike Clark, a director of product management on Facebook's privacy team.
Facebook has disabled the personal accounts of New York University (NYU) scientists studying political ads on the social network, alleging their extraction of data violates its terms of service. Facebook's Mike Clark said the company also blocked their access to Facebook's application programming interfaces, used to share network data to other apps or services, and disabled additional apps and pages linked to the NYU Ad Observatory project. The initiative has participants download a browser extension that gathers data on the political ads they see on Facebook, and how they were targeted. NYU's Laura Edelson said Facebook has basically terminated the university's effort to study misinformation in political ads "using user privacy, a core belief that we have always put first in our work, as a pretext for doing this."
Teenage Girls in Northern Nigeria 'Open Their Minds' with Robotics
KANO, Nigeria, Aug 2 (Reuters) - Teenage girls in the northern Nigerian city of Kano are learning robotics, computing and other STEM subjects as part of an innovative project that challenges local views of what girls should be doing in a socially conservative Muslim society. In a place where girls are expected to marry young and their education is often cut short, the Kabara NGO aims to widen their world view through activities such as building machines, using common software programmes and learning about maths and science. "I came to Kabara to learn robotics and I have created a lot of things," said Fatima Zakari, 12. One of her creations is a battery-powered spin art device to create distinctive artwork. "I am happy to share this with my younger ones and the community at large for the growth of the society," she said proudly. Kabara is the brainchild of engineer Hadiza Garbati, who wanted to raise the aspirations of northern Nigerian girls and help them develop skills they might harness to start their own small businesses or enroll at university. Since it started in Kano in 2016, Kabara has trained more than 200 girls, and Garbati is working on expanding her project to other northern cities. It is a rare educational success story in northern Nigeria, where more than 1,000 children have been kidnapped from their schools by ransom seekers since December, causing many more to drop out because their parents are fearful of abductions. Kabara, located in a safe area in the heart of Kano, has been unaffected by the crisis. Garbati said she had overcome resistance from some parents by being highly respectful of Islamic traditions. The girls wear their hijabs during sessions. Crucial to her success has been support from Nasiru Wada, a close adviser to the Emir of Kano, a figurehead who has moral authority in the community. Wada holds the traditional title of Magajin Garin Kano. "The main reason why we are doing this is to encourage them, to open their minds," said Wada. 
"Tradition, not to say discourages, but does not put enough emphasis on the education of the girl child, with the belief that oh, at a certain age, she will get married," he said. "It is good to encourage the girl child to study not only the humanities but the science subjects as well because we need healthcare workers, we need science teachers," he said, adding that even married women needed skills to manage their affairs.
The Kabara non-governmental organization (NGO) in northern Nigeria is helping teenage girls in the city of Kano to learn robotics, computing, and other science, technology, engineering, and math subjects. Founded in 2016 by engineer Hadiza Garbati, Kabara has trained over 200 girls, with plans to extend its reach to other northern Nigerian cities. Conservative Muslim traditions in the region often deemphasize girls' education; the NGO hopes to broaden their horizons through activities like building machines, using common software programs, and learning math and science. Said Kabara supporter Nasiru Wada, an adviser to Kano's emir, "The main reason why we are doing this is to encourage them, to open their minds."
3D 'Heat Map' Animation Shows How Seizures Spread in the Brains of Epilepsy Patients
For 29 years, from the time she was 12, Rashetta Higgins had been wracked by epileptic seizures - as many as 10 a week - in her sleep, at school and at work. She lost four jobs over 10 years. One seizure brought her down as she was climbing concrete stairs, leaving a bloody scene and a bad gash near her eye. A seizure struck in 2005 while she was waiting at the curb for a bus. "I fell down right when the bus was pulling up," she says. "My friend grabbed me just in time. I fell a lot. I've had concussions. I've gone unconscious. It has put a lot of wear and tear on my body." Then, in 2016, Higgins' primary-care doctor, Mary Clark, at La Clinica North Vallejo, referred her to UC San Francisco's Department of Neurology, marking the beginning of her journey back to health and her contribution to new technology that will make it easier to locate seizure activity in the brain. Medication couldn't slow her seizures or diminish their severity, so the UCSF Epilepsy Center team recommended surgery to first record and pinpoint the location of the bad activity and then remove the brain tissue that was triggering the seizures. In April 2019, Higgins was admitted to UCSF's 10-bed Epilepsy Monitoring Unit at UCSF Helen Diller Medical Center at Parnassus Heights, where surgeons implanted more than 150 electrodes. EEGs tracked her brain wave activity around the clock to pinpoint the region of tissue that had triggered her brainstorms for 29 years. In just one week, Higgins had 10 seizures, and each time, the gently undulating EEG tracings recording normal brain activity jerked suddenly into the tell-tale jagged peaks and valleys indicating a seizure. To find the site of a seizure in a patient's brain, experts currently look at brain waves by reviewing hundreds of squiggly lines on a screen, watching how high and low the peaks and valleys go (the amplitude) and how fast these patterns repeat or oscillate (the frequency).
But during a seizure, electrical activity in the brain spikes so fast that the many EEG traces can be tough to read. "We look for the electrodes with the largest change," says Robert Knowlton, MD, professor of Neurology, the medical director of the UCSF Seizure Disorders Surgery Program and a member of the UCSF Weill Institute of Neurosciences. "Higher frequencies are weighted more. They usually have the lowest amplitude, so we look on the EEG for a combination of the two extremes. It's visual - not completely quantitative. It's complicated to put together." Enter Jonathan Kleen, MD, PhD, assistant professor of Neurology and a member of the UCSF Weill Institute of Neurosciences. Trained as both a neuroscientist and a computer scientist, he quickly saw the potential of a software strategy to clear up the picture - literally. "The field of information visualization has really matured in the last 20 years," Kleen said. "It's a process of taking huge volumes of data with many details - space, time, frequency, intensity and other things - and distilling them into a single intuitive visualization like a colorful picture or video." Kleen developed a program that translates the hundreds of EEG traces into a 3-D movie showing activity in all recorded locations in the brain. The result is a multicolored 3-D heat map that looks very much like a meteorologist's hurricane weather map. The heat map's cinematic representation of seizures, projected onto a 3-D reconstruction of the patient's own brain, helps one plainly see where a seizure starts and track where, and how fast, it spreads through the brain. The heat map closely aligns with the traditional visual analysis, but it's simpler to understand and is personalized to the patient's own brain. "To see it on the heat map makes it much easier to define where the seizure starts, and whether there's more than one trigger site," Knowlton said. "And it is much better at seeing how the seizure spreads. With conventional methods, we have no idea where it's spreading." Researchers are using the new technology at UCSF to gauge how well it pinpoints the brain's seizure trigger compared with the standard visual approach. So far, the heat maps have been used to help identify the initial seizure site and the spread of a seizure through the brain in more than 115 patients. Kleen's strategy is disarmingly simple. To distinguish seizures from normal brain activity, he added up the lengths of the lines on an EEG. The rapid changes measured during a seizure produce a lengthy cumulative line, while gently undulating brain waves make much shorter lines. Kleen's software translated these lengths into different colors, and the visualization was born. The technology proved pivotal in Higgins' treatment. "Before her recordings, we had feared that Rashetta had multiple seizure-generating areas," Kleen said. "But her video made it plainly obvious that there was a single problem area, and the bad activity was rapidly spreading from that primary hot spot." The journal Epilepsia put Kleen's and Knowlton's 3-D heat map technology on the cover, and the researchers made their software open-source, so others can improve upon it. "It's been a labor of love to get this technology to come to fruition," Kleen said. "I feel very strongly that to make progress in the field we need to share technologies, especially things that will help patients." Higgins has been captivated by the 3-D heat maps of her brain. "It was amazing," she said. "It was like, 'That's my brain. I'm watching my brain function.'" And the surgery has been a life-changing success. Higgins hasn't had a seizure in more than two years, feels mentally sharp, and is looking for a job. "When I wake up, I'm right on it every morning," she said. "I waited for this day for a long, long time."
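The line-length idea the article describes - sum the sample-to-sample distances a trace covers, then map that number to a color - is simple enough to sketch. The following is an illustrative reconstruction, not the UCSF software: it computes a sliding-window line length over a synthetic single-channel trace, where the window size, sampling rate, and signal shapes are all invented for the demonstration.

```python
import math

def line_length(trace, win, step):
    """Sliding-window line length: the summed absolute sample-to-sample
    differences in each window. Fast, high-amplitude seizure activity
    traces a long cumulative line; quiet background traces a short one."""
    diffs = [abs(b - a) for a, b in zip(trace, trace[1:])]
    return [sum(diffs[s:s + win])
            for s in range(0, len(diffs) - win + 1, step)]

# Synthetic one-channel trace at 1000 samples/s: one second of slow,
# low-amplitude background followed by one second of fast, high-amplitude
# "seizure" activity.
fs = 1000
background = [0.1 * math.sin(2 * math.pi * 8 * n / fs) for n in range(fs)]
seizure = [1.0 * math.sin(2 * math.pi * 40 * n / fs) for n in range(fs)]
trace = background + seizure

ll = line_length(trace, win=200, step=200)
# Windows over the seizure segment score far higher than background
# windows; a heat map would render those time bins as "hot" colors.
```

In practice this per-window score would be computed for every electrode and time step, then projected as color onto the brain reconstruction; the sketch only shows the scoring step.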
University of California, San Francisco (UCSF) neuroscientists used an algorithm to visualize in three dimensions hundreds of electroencephalography (EEG) traces in the brain, resulting in an animated heat map of seizures in epileptic patients. UCSF's Robert Knowlton said the tool "makes it much easier to define where the seizure starts, and whether there's more than one trigger site," as well as visualizing the seizure's propagation. The algorithm differentiates seizures from the normal activity of the brain by adding the lengths of the lines on an EEG, and translating them into distinct colors. The heat maps have been used to help identify the initial seizure point and the spread of a seizure through the brain in over 115 patients.
Endlessly Changing Playground Teaches AIs to Multitask
What did they learn? Some of DeepMind's XLand AIs played 700,000 different games in 4,000 different worlds, encountering 3.4 million unique tasks in total. Instead of learning the best thing to do in each situation, which is what most existing reinforcement-learning AIs do, the players learned to experiment - moving objects around to see what happened, or using one object as a tool to reach another object or hide behind - until they beat the particular task. In the videos you can see the AIs chucking objects around until they stumble on something useful: a large tile, for example, becomes a ramp up to a platform. It is hard to know for sure if all such outcomes are intentional or happy accidents, say the researchers. But they happen consistently. AIs that learned to experiment had an advantage in most tasks, even ones that they had not seen before. The researchers found that after just 30 minutes of training on a complex new task, the XLand AIs adapted to it quickly. But AIs that had not spent time in XLand could not learn these tasks at all.
Alphabet's DeepMind Technologies has developed a videogame-like three-dimensional world that allows artificial intelligence (AI) agents to learn skills by experimenting and exploring. Those skills can be used to perform tasks they have not performed before. XLand is managed by a central AI that controls the environment, game rules, and number of players, with reinforcement learning helping the playground manager and players to improve over time. The AI players played 700,000 different games in 4,000 different worlds and performed 3.4 million unique tasks. Rather than learning the best thing to do in each scenario, the AI players experimented until they completed the task at hand.
CISA Launches Initiative to Combat Ransomware
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) officially launched the Joint Cyber Defense Collaborative (JCDC), an anti-ransomware initiative supported by public-private information sharing. CISA director Jen Easterly said the organization was created to develop cyber defense strategies and exchange insights between the federal government and private-sector partners. A CISA webpage said interagency officials will work in the JCDC office to lead the development of U.S. cyber defense plans that incorporate best practices for dealing with cyber intrusions; a key goal is coordinating public-private strategies to combat cyberattacks, particularly ransomware, while engineering incident response frameworks. Said security vendor CrowdStrike Services' Shawn Henry, the JCDC "will create an inclusive, collaborative environment to develop proactive cyber defense strategies" and help "implement coordinated operations to prevent and respond to cyberattacks."
Apple to Scan iPhones for Child Sex Abuse Images
"Regardless of what Apple's long term plans are, they've sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users' phones for prohibited content," Matthew Green, a security researcher at Johns Hopkins University, said.
Apple has unveiled a system designed to scan U.S. customers' iPhones to determine if they contain child sexual abuse material (CSAM). The system compares photo files on each handset to a database of known CSAM gathered by the National Center for Missing and Exploited Children and other organizations. Before an iPhone can be used to upload an image to the iCloud Photos platform, the technology will look for matches to known CSAM; matches are evaluated by a human reviewer, who reports confirmed matches to law enforcement. The company said the system offers significantly better privacy than existing techniques, because Apple only learns about users' images if their iCloud Photos accounts contain collections of known CSAM.
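The match-before-upload flow described above can be sketched as a set-membership check with a review threshold. This is only an illustration of the control flow: Apple's real system uses a perceptual "NeuralHash" and cryptographic threshold techniques, not the plain SHA-256 digests and invented names used here.

```python
import hashlib

# Hypothetical stand-in for the database of known-bad fingerprints.
KNOWN_CSAM_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

REVIEW_THRESHOLD = 3  # matches required before human review is triggered

def scan_before_upload(image_files):
    """Return matching fingerprints and whether the review threshold is met."""
    matches = []
    for data in image_files:
        digest = hashlib.sha256(data).hexdigest()
        if digest in KNOWN_CSAM_HASHES:
            matches.append(digest)
    # Per the described design, only a collection of matches crossing the
    # threshold would be surfaced to a human reviewer.
    return matches, len(matches) >= REVIEW_THRESHOLD

matches, flagged = scan_before_upload([b"holiday-photo", b"known-bad-image-bytes"])
print(len(matches), flagged)  # one match, below the review threshold
```

Note that a cryptographic hash only matches byte-identical files; a perceptual hash is what lets the real system match visually similar images.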
Information Transfer Protocol Reaches Quantum Speed Limit
Even though quantum computers are a young technology and aren't yet ready for routine practical use, researchers have already been investigating the theoretical constraints that will bound quantum technologies. One of the things researchers have discovered is that there are limits to how quickly quantum information can race across any quantum device. These speed limits are called Lieb-Robinson bounds, and, for several years, some of the bounds have taunted researchers: For certain tasks, there was a gap between the best speeds allowed by theory and the speeds possible with the best algorithms anyone had designed. It's as though no car manufacturer could figure out how to make a model that reached the local highway limit. But unlike speed limits on roadways, information speed limits can't be ignored when you're in a hurry - they are the inevitable results of the fundamental laws of physics. For any quantum task, there is a limit to how quickly interactions can make their influence felt (and thus transfer information) a certain distance away. The underlying rules define the best performance that is possible. In this way, information speed limits are more like the max score on an old-school arcade game than traffic laws, and achieving the ultimate score is an alluring prize for scientists. Now a team of researchers, led by JQI Fellow Alexey Gorshkov, has found a quantum protocol that reaches the theoretical speed limits for certain quantum tasks. Their result provides new insight into designing optimal quantum algorithms and proves that there hasn't been a lower, undiscovered limit thwarting attempts to make better designs. Gorshkov, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS) and a physicist at the National Institute of Standards and Technology, and his colleagues presented their new protocol in a recent article published in the journal Physical Review X.
"This gap between maximum speeds and achievable speeds had been bugging us, because we didn't know whether it was the bound that was loose, or if we weren't smart enough to improve the protocol," says Minh Tran, a JQI and QuICS graduate student who was the lead author on the article. "We actually weren't expecting this proposal to be this powerful. And we were trying a lot to improve the bound - turns out that wasn't possible. So, we're excited about this result." Unsurprisingly, the theoretical speed limit for sending information in a quantum device (such as a quantum computer) depends on the device's underlying structure. The new protocol is designed for quantum devices where the basic building blocks - qubits - influence each other even when they aren't right next to each other. In particular, the team designed the protocol for qubits that have interactions that weaken as the distance between them grows. The new protocol works for a range of interactions that don't weaken too rapidly, which covers the interactions in many practical building blocks of quantum technologies, including nitrogen-vacancy centers, Rydberg atoms, polar molecules and trapped ions. Crucially, the protocol can transfer information contained in an unknown quantum state to a distant qubit, an essential feature for achieving many of the advantages promised by quantum computers. Working with an unknown state limits the way information can be transferred and rules out some direct approaches, like just creating a copy of the information at the new location. (That requires knowing the quantum state you are transferring.) In the new protocol, data stored on one qubit is shared with its neighbors, using a phenomenon called quantum entanglement. Then, since all those qubits help carry the information, they work together to spread it to other sets of qubits. Because more qubits are involved, they transfer the information even more quickly.
This process can be repeated to keep generating larger blocks of qubits that pass the information faster and faster. So instead of the straightforward method of qubits passing information one by one like a basketball team passing the ball down the court, the qubits are more like snowflakes that combine into a larger and more rapidly rolling snowball at each step. And the bigger the snowball, the more flakes stick with each revolution. But that's maybe where the similarities to snowballs end. Unlike a real snowball, the quantum collection can also unroll itself. The information is left on the distant qubit when the process runs in reverse, returning all the other qubits to their original states. When the researchers analyzed the process, they found that the snowballing qubits speed along the information at the theoretical limits allowed by physics. Since the protocol reaches the previously proven limit, no future protocol should be able to surpass it. "The new aspect is the way we entangle two blocks of qubits," Tran says. "Previously, there was a protocol that entangled information into one block and then tried to merge the qubits from the second block into it one by one. But now because we also entangle the qubits in the second block before merging it into the first block, the enhancement will be greater." The protocol is the result of the team exploring the possibility of simultaneously moving information stored on multiple qubits. They realized that using blocks of qubits to move information would enhance a protocol's speed. "On the practical side, the protocol allows us to not only propagate information, but also entangle particles faster," Tran says. "And we know that using entangled particles you can do a lot of interesting things like measuring and sensing with a higher accuracy. And moving information fast also means that you can process information faster. 
There's a lot of other bottlenecks in building quantum computers, but at least on the fundamental limits side, we know what's possible and what's not." In addition to the theoretical insights and possible technological applications, the team's mathematical results also reveal new information about how large a quantum computation needs to be in order to simulate particles with interactions like those of the qubits in the new protocol. The researchers are hoping to explore the limits of other kinds of interactions and to explore additional aspects of the protocol, such as how robust it is against noise disrupting the process. In addition to Gorshkov and Tran, co-authors of the research paper include JQI and QuICS graduate student Abhinav Deshpande, JQI and QuICS graduate student Andrew Y. Guo, and University of Colorado Boulder Professor of Physics Andrew Lucas. Story by Bailey Bedford.
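The "snowball" bookkeeping can be illustrated with a toy comparison: passing a state one qubit at a time takes a number of steps linear in the distance, while an entangled block that doubles each round reaches the same distance in logarithmically many merge rounds. This sketch only counts rounds; it does not reproduce the paper's actual bounds, which depend on how the interactions decay with distance.

```python
def sequential_hops(distance):
    # Baseline: pass the state one qubit at a time, like a bucket brigade.
    return distance

def block_doubling_rounds(distance):
    # Sketch of the "snowball" idea: at each round the entangled block
    # doubles, so the reachable distance doubles too.
    rounds, reach = 0, 1
    while reach < distance:
        reach *= 2
        rounds += 1
    return rounds

for d in (8, 1024):
    print(d, sequential_hops(d), block_doubling_rounds(d))
```

The contrast (1,024 hops versus 10 merge rounds) is why entangling whole blocks, rather than merging qubits one by one, speeds up the transfer.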
Joint Quantum Institute (JQI) scientists have developed a quantum information transfer protocol that reaches theoretical speed limits for some quantum operations. The protocol is engineered for quantum devices in which interactions between quantum bits (qubits) weaken as they recede from each other, covering a range of interactions that do not weaken too quickly. The protocol can deliver many of quantum computers' promised benefits by transferring information contained in an unknown quantum state to a distant qubit. Data stored on one qubit is shared with its neighbors via quantum entanglement, and the qubits cooperate to spread it to other sets of qubits, accelerating the transfer as more sets are involved. JQI's Minh Tran said, "Moving information fast also means that you can process information faster."
Security Flaws Found in Popular EV Chargers
U.K. cybersecurity company Pen Test Partners has identified several vulnerabilities in home electric vehicle chargers from five of the six brands it examined, as well as in a large public EV charging network. While the charger manufacturers resolved most of the issues, the findings are the latest example of the poorly regulated world of Internet of Things devices, which are poised to become all but ubiquitous in our homes and vehicles. Vulnerabilities were identified in five different EV charging brands - Project EV, Wallbox, EVBox, EO Charging's EO Hub and EO mini pro 2, and Hypervolt - and public charging network Chargepoint. The researchers also examined Rolec, but found no vulnerabilities. Security researcher Vangelis Stykas identified several security flaws among the various brands that could have allowed a malicious hacker to hijack user accounts, impede charging and even turn one of the chargers into a "backdoor" into the owner's home network. The consequences of a hack to a public charging station network could include theft of electricity at the expense of driver accounts and turning chargers on or off. Some EV chargers, like Wallbox and Hypervolt, used a Raspberry Pi compute module, a low-cost computer that's often used by hobbyists and programmers. "The Pi is a great hobbyist and educational computing platform, but in our opinion it's not suitable for commercial applications as it doesn't have what's known as a 'secure bootloader,'" Pen Test Partners founder Ken Munro told TechCrunch. "This means anyone with physical access to the outside of your home (hence to your charger) could open it up and steal your Wi-Fi credentials. Yes, the risk is low, but I don't think charger vendors should be exposing us to additional risk," he said. The hacks are "really fairly simple," Munro said. "I can teach you to do this in five minutes," he added.
The company's report, published this past weekend , touched on vulnerabilities associated with emerging protocols like the Open Charge Point Interface, maintained and managed by the EVRoaming Foundation. The protocol was designed to make charging seamless between different charging networks and operators. Munro likened it to roaming on a cell phone, allowing drivers to use networks outside of their usual charging network. OCPI isn't widely used at the moment, so these vulnerabilities could be designed out of the protocol. But if left unaddressed, it could mean "that a vulnerability in one platform potentially creates a vulnerability in another," Stykas explained. Hacks to charging stations have become a particularly nefarious threat as a greater share of transportation becomes electrified and more power flows through the electric grid. Electric grids are not designed for large swings in power consumption - but that's exactly what could happen, should there be a large hack that turned on or off a sufficient number of DC fast chargers. "It doesn't take that much to trip the power grid to overload," Munro said. "We've inadvertently made a cyberweapon that others could use against us." While the effects on the electric grid are unique to EV chargers, cybersecurity issues aren't. The routine hacks reveal more endemic issues in IoT devices, where being first to market often takes precedence over sound security - and where regulators are barely able to catch up to the pace of innovation. "There's really not a lot of enforcement," Justin Brookman, the director of consumer privacy and technology policy for Consumer Reports, told TechCrunch in a recent interview. Data security enforcement in the United States falls within the purview of the Federal Trade Commission. 
But while there is a general-purpose consumer protection statute on the books, "it may well be illegal to build a system that has poor security, it's just whether you're going to get enforced against or not," said Brookman. A separate federal bill, the Internet of Things Cybersecurity Improvement Act, passed last September but only broadly applies to the federal government. There's only slightly more movement on the state level. In 2018, California passed a bill banning default passwords in new consumer electronics starting in 2020 - useful progress to be sure, but which largely puts the burden of data security in the hands of consumers. California, as well as states like Colorado and Virginia, also have passed laws requiring reasonable security measures for IoT devices. Such laws are a good start. But (for better or worse) the FTC isn't like the U.S. Food and Drug Administration, which audits consumer products before they hit the market. As of now, there's no security check on technology devices prior to them reaching consumers. Over in the United Kingdom, "it's the Wild West over here as well, right now," Munro said. Some startups have emerged that are trying to tackle this issue. One is Thistle Technologies , which is trying to help IoT device manufacturers integrate mechanisms into their software to receive security updates. But it's unlikely this problem will be fully solved on the back of private industry alone. Because EV chargers could pose a unique threat to the electric grid, there's a possibility that EV chargers could fall under the scope of a critical infrastructure bill. Last week, President Joe Biden released a memorandum calling for greater cybersecurity for systems related to critical infrastructure. "The degradation, destruction or malfunction of systems that control this infrastructure could cause significant harm to the national and economic security of the United States," Biden said. 
Whether this will trickle down to consumer products is another question. Correction: The article has been updated to note that the researchers found no vulnerabilities in the Rolec home EV charger. The first paragraph was clarified after an earlier editing error.
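The account-hijacking class of flaw described above often boils down to a missing authorization check, sometimes called an insecure direct object reference. The sketch below uses invented function names and data; it is not any vendor's actual API, only an illustration of the bug pattern and its fix.

```python
# Toy model of a charger-control API backed by per-account state.
CHARGERS = {"acct-1": {"charging": True}, "acct-2": {"charging": True}}

def stop_charging_vulnerable(session_user, requested_acct):
    # BUG: the server never checks that requested_acct belongs to
    # session_user, so any logged-in user can toggle any charger.
    CHARGERS[requested_acct]["charging"] = False
    return "ok"

def stop_charging_fixed(session_user, requested_acct):
    # FIX: authorize the object reference against the session first.
    if requested_acct != session_user:
        return "forbidden"
    CHARGERS[requested_acct]["charging"] = False
    return "ok"

# acct-1's session switches off acct-2's charger via the vulnerable path:
print(stop_charging_vulnerable("acct-1", "acct-2"))  # "ok" -- hijacked
print(stop_charging_fixed("acct-1", "acct-1"))       # "ok" -- legitimate
```

Scaled up to thousands of chargers, the same missing check is what turns a consumer-app bug into the grid-level on/off switch Munro warns about.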
Analysts at U.K. cybersecurity firm Pen Test Partners have identified flaws in the application programming interfaces of five of the six home electric vehicle (EV) charging brands they examined, as well as the Chargepoint public EV charging station network. Pen Test analyst Vangelis Stykas found several vulnerabilities that could enable hackers to commandeer user accounts, hinder charging, and repurpose a charger as a backdoor into the owner's home network. The Chargepoint flaw, meanwhile, could let hackers steal electricity and shift the cost to driver accounts, and activate or deactivate chargers. Some EV chargers use a Raspberry Pi compute module, a popular low-cost computer that Pen Test's Ken Munro said is unsuitable for commercial applications due to its lack of a secure bootloader. Charger manufacturers have corrected most of the issues, but the flaws' existence highlights the poor regulation of Internet of Things devices.
ForCE Model Accurately Predicts How Coasts Will Be Impacted by Storms, Sea-Level Rise
Coastal communities across the world are increasingly facing up to the huge threats posed by a combination of extreme storms and predicted rises in sea levels as a result of global climate change. However, scientists at the University of Plymouth have developed a simple algorithm-based model which accurately predicts how coastlines could be affected and - as a result - enables communities to identify the actions they might need to take in order to adapt. The Forecasting Coastal Evolution (ForCE) model has the potential to be a game-changing advance in coastal evolution science, allowing adaptations in the shoreline to be predicted over timescales of anything from days to decades and beyond. This broad range of timescales means that the model is capable of predicting both the short-term impact of violent storms or storm sequences (over days to years), as well as the much longer-term evolution of the coast due to forecasted rising sea levels (decades). The computer model uses past and present beach measurements, and data showing the physical properties of the coast, to forecast how they might evolve in the future and assess the resilience of our coastlines to erosion and flooding. Unlike previous simple models of its kind that attempt forecasts on similar timescales, ForCE also considers other key factors like tidal, surge and global sea-level rise data to assess how beaches might be impacted by predicted climate change. Beach sediments form our frontline of defense against coastal erosion and flooding, preventing damage to our valuable coastal infrastructure. So coastal managers are rightly concerned about monitoring the volume of sediment on our beaches. The new ForCE model opens the door for managers to keep track of the 'health' of our beaches without leaving their office and to predict how this might change in a future of rising sea level and changing waves. 
Model predictions have been shown to be more than 80% accurate in current tests, based on measurements of beach change at Perranporth, on the north coast of Cornwall in South West England. The model has also been shown to accurately predict the formation and location of offshore sand bars in response to extreme storms, and how beaches recover in the months and years after storm events. As such, researchers say it could provide an early warning for coastal erosion and potential overtopping, while its stability and efficiency suggest it could forecast coastal evolution over much longer timescales. The study, published in Coastal Engineering, highlights that the increasing threats posed by sea-level rise and coastal squeeze have meant that tracking the morphological evolution of sedimentary coasts is of substantial and increasing societal importance. Dr. Mark Davidson, Associate Professor in Coastal Processes, developed the ForCE model having previously pioneered a traffic light system based on the severity of approaching storms to highlight the level of action required to protect particular beaches. He said: "Top level coastal managers around the world have recognized a real need to assess the resilience of our coastlines in a climate of changing waves and sea level. However, until now they have not had the essential tools that are required to make this assessment. We hope that our work with the ForCE model will be a significant step towards providing this new and essential capability." The University of Plymouth is one of the world's leading authorities in coastal engineering and change in the face of extreme storms and sea-level rise. Researchers from the University's Coastal Processes Research Group have examined their effects everywhere from the coasts of South West England to remote islands in the Pacific Ocean. 
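The behavioural approach the article describes can be illustrated with a minimal sketch of an equilibrium-type shoreline model. This is not the published ForCE formulation: the function name, coefficients, and the Bruun-rule-style sea-level term below are all assumptions made purely for illustration of how storm forcing and sea-level rise can both drive shoreline change in one update rule.

```python
# Illustrative equilibrium-type shoreline model (NOT the published ForCE
# equations; every name and coefficient is a hypothetical choice). The
# shoreline migrates toward an equilibrium set by recent wave energy, and a
# Bruun-rule-style term converts sea-level rise into landward retreat.

def step_shoreline(x, wave_energy, eq_energy, slr_rate,
                   response_rate=0.05, slr_factor=50.0, dt=1.0):
    """Advance cross-shore shoreline position x (metres) by one time step.

    Erosion occurs while wave energy exceeds its equilibrium value and
    accretion while it falls below it; slr_factor plays the role of an
    inverse beach slope (retreat ~ slope factor * rise)."""
    storm_term = -response_rate * (wave_energy - eq_energy)
    slr_term = -slr_factor * slr_rate
    return x + (storm_term + slr_term) * dt

# A quiet day, a two-day storm, then one recovery day (no sea-level rise).
x = 100.0
for energy in [1.0, 5.0, 5.0, 1.0]:
    x = step_shoreline(x, energy, eq_energy=2.0, slr_rate=0.0)
print(round(x, 2))  # net erosion: the shoreline ends landward of 100 m
```

Real models of this family are driven by measured wave, tide, and surge time series and are calibrated against survey data such as the Perranporth record; the sketch only shows the shape of the feedback.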
They have shown the winter storms of 2013/14 were the most energetic to hit the Atlantic coast of western Europe since records began in 1948, and demonstrated that five years after those storms, many beaches had still not fully recovered. Researchers from the University of Plymouth have been carrying out beach measurements at Perranporth in North Cornwall for more than a decade. Recently, this has been done as part of the £4 million BLUE-coast project, funded by the Natural Environment Research Council, which aims to address the importance of sediment budgets and their role in coastal recovery. Surveys have shown that following extreme storms, such as those which hit the UK in 2013/14, beaches recovered to some degree in the summer months but that recovery was largely wiped out in the following winters. That has created a situation where high water shorelines are further landward at sites such as Perranporth. Sea level is presently forecast to rise by about 0.5m over the next 100 years. However, there is large uncertainty attached to this and it could easily be more than 1m over the same time-frame. If the latter proves to be true, prominent structures on the coastline - such as the Watering Hole bar - will be under severe threat within the next 60 years. Reference: "Forecasting coastal evolution on time-scales of days to decades" by Mark Davidson, 10 June 2021, Coastal Engineering. DOI: 10.1016/j.coastaleng.2021.103928
An algorithm-based model developed by researchers at the University of Plymouth in the U.K. predicts the impact of storms and rising sea levels on coastlines with greater than 80% accuracy. The Forecasting Coastal Evolution (ForCE) model can predict the evolution of coastlines and assess their resilience to erosion and flooding using past and present beach measurements, data on coastlines' physical properties, and tidal, surge, and global sea-level rise data. The model can predict short-term impacts over days to years, as well as longer-term coastal evolution over decades. Said Plymouth's Mark Davidson, who developed the model, "Top level coastal managers around the world have recognized a real need to assess the resilience of our coastlines in a climate of changing waves and sea level. ...We hope that our work with the ForCE model will be a significant step towards providing this new and essential capability."
AI Algorithm to Assess Metastatic Potential in Skin Cancers
DALLAS - August 3, 2021 - Using artificial intelligence (AI), researchers from UT Southwestern have developed a way to accurately predict which skin cancers are highly metastatic. The findings, published as the July cover article of Cell Systems, show the potential for AI-based tools to revolutionize pathology for cancer and a variety of other diseases. "We now have a general framework that allows us to take tissue samples and predict mechanisms inside cells that drive disease, mechanisms that are currently inaccessible in any other way," said study leader Gaudenz Danuser, Ph.D., Professor and Chair of the Lyda Hill Department of Bioinformatics at UTSW. AI technology has significantly advanced over the past several years, Dr. Danuser explained, with deep learning-based methods able to distinguish minute differences in images that are essentially invisible to the human eye. Researchers have proposed using this latent information to look for differences in disease characteristics that could offer insight on prognoses or guide treatments. However, he said, the differences distinguished by AI are generally not interpretable in terms of specific cellular characteristics - a drawback that has made AI a tough sell for clinical use. To overcome this challenge, Dr. Danuser and his colleagues used AI to search for differences between images of melanoma cells with high and low metastatic potential - a characteristic that can mean life or death for patients with skin cancer - and then reverse-engineered their findings to figure out which features in these images were responsible for the differences. Using tumor samples from seven patients and available information on their disease progression, including metastasis, the researchers took videos of about 12,000 random cells living in petri dishes, generating about 1,700,000 raw images. The researchers then used an AI algorithm to pull 56 different abstract numerical features from these images. Dr. 
Danuser and his colleagues found one feature that was able to accurately discriminate between cells with high and low metastatic potential. By manipulating this abstract numerical feature, they produced artificial images that exaggerated visible characteristics inherent to metastasis that human eyes cannot detect, he added. The highly metastatic cells produced slightly more pseudopodial extensions - a type of fingerlike projection - and had increased light scattering, an effect that may be due to subtle rearrangements of cellular organelles. To further prove the utility of this tool, the researchers first classified the metastatic potential of cells from human melanomas that had been frozen and cultured in petri dishes for 30 years, and then implanted them into mice. Those predicted to be highly metastatic formed tumors that readily spread throughout the animals, while those predicted to have low metastatic potential spread little or not at all. Dr. Danuser, a Professor of Cell Biology and member of the Harold C. Simmons Comprehensive Cancer Center, noted that this method needs further study before it becomes part of clinical care. But eventually, he added, it may be possible to use AI to distinguish important features of cancers and other diseases. Dr. Danuser is the Patrick E. Haggerty Distinguished Chair in Basic Biomedical Science at UTSW. Other UTSW researchers who contributed to this study include Assaf Zaritsky, Andrew R. Jamieson, Erik S. Welf, Andres Nevarez, Justin Cillay, Ugur Eskiocak, and Brandi L. Cantarel. This study was funded by grants from the Cancer Prevention and Research Institute of Texas (CPRIT R160622), the National Institutes of Health (R35GM126428, K25CA204526), and the Israeli Council for Higher Education via the Data Science Research Center, Ben-Gurion University of the Negev, Israel. 
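The screening step described above - extracting abstract numerical features per cell and asking which one discriminates high from low metastatic potential - can be sketched in a few lines. This is a hedged illustration on synthetic data, not UTSW's actual pipeline: the separation score, feature count, and all names below are assumptions.

```python
# Hypothetical sketch of ranking abstract per-cell features by how well
# they separate cells from highly vs. weakly metastatic tumors (synthetic
# data; not the study's actual deep-learning features or method).
import numpy as np

def feature_separation(features, labels):
    """Score each feature column by the gap between class means,
    normalized by the pooled standard deviation (a Cohen's d-like score)."""
    hi, lo = features[labels == 1], features[labels == 0]
    pooled = np.sqrt((hi.var(axis=0) + lo.var(axis=0)) / 2) + 1e-9
    return np.abs(hi.mean(axis=0) - lo.mean(axis=0)) / pooled

rng = np.random.default_rng(0)
labels = np.array([1] * 50 + [0] * 50)   # 1 = high metastatic potential
X = rng.normal(size=(100, 5))            # 5 abstract features per cell
X[labels == 1, 2] += 3.0                 # plant the signal in feature 2

scores = feature_separation(X, labels)
print(int(np.argmax(scores)))            # index of the winning feature
```

In the study itself, one of 56 deep-learned features played this discriminating role; the point of the sketch is only the ranking logic, not the feature extraction.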
About UT Southwestern Medical Center UT Southwestern, one of the nation's premier academic medical centers, integrates pioneering biomedical research with exceptional clinical care and education. The institution's faculty has received six Nobel Prizes, and includes 25 members of the National Academy of Sciences, 16 members of the National Academy of Medicine, and 13 Howard Hughes Medical Institute Investigators. The full-time faculty of more than 2,800 is responsible for groundbreaking medical advances and is committed to translating science-driven research quickly to new clinical treatments. UT Southwestern physicians provide care in about 80 specialties to more than 117,000 hospitalized patients, more than 360,000 emergency room cases, and oversee nearly 3 million outpatient visits a year.
A new artificial intelligence (AI) algorithm can predict highly metastatic skin cancers. The University of Texas Southwestern Medical Center (UTSW) researchers who developed the algorithm used AI to identify differences between images of melanoma cells with high and low metastatic potential, then used reverse engineering to determine which visual features were associated with the difference. They generated 1.7 million raw images from videos of about 12,000 random cells from tumor samples from seven patients. The algorithm identified 56 different abstract numerical features from those images, which the researchers manipulated to generate images exaggerating visible characteristics inherent to metastasis. Said UTSW's Gaudenz Danuser, "We now have a general framework that allows us to take tissue samples and predict mechanisms inside cells that drive disease."
Do You Hear What I Hear? A Cyberattack.
Cybersecurity analysts deal with an enormous amount of data, especially when monitoring network traffic. If one were to print the data in text form, a single day's worth of network traffic may be akin to a thick phonebook. In other words, detecting an abnormality is like finding a needle in a haystack. "It's an ocean of data," says Yang Cai, a senior systems scientist in CyLab. "The important patterns we need to see become buried by a lot of trivial or normal patterns." Cai has been working for years to come up with ways to make abnormalities in network traffic easier to spot. A few years ago, he and his research group developed a data visualization tool that allowed one to see network traffic patterns, and now he has developed a way to hear them. In a new study presented this week at the Conference on Applied Human Factors and Ergonomics, Cai and two co-authors show how cybersecurity data can be heard in the form of music. When there's a change in the network traffic, there is a change in the music. "We wanted to articulate normal and abnormal patterns through music," Cai says. "The process of sonification - using audio to perceptualize data - is not new, but sonification to make data more appealing to the human ear is." The researchers experimented with several different "sound mapping" algorithms, transforming numerical datasets into music with various melodies, harmonies, time signatures, and tempos. For example, the researchers assigned specific notes to the 10 digits that make up any number found in data: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. To represent the third and fourth digits of the mathematical constant Pi - 4 and 1 - they modified the time signature of one measure to 4/4 and the following measure to 1/4. While this all may sound fairly complicated, one doesn't need to be a trained musician to be able to hear these changes in the music, the researchers found. 
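The digit-to-note mapping described above can be sketched in a few lines. This is an illustration of the idea, not the researchers' actual sound-mapping algorithm: the choice of a C major scale and the function name are assumptions.

```python
# Illustrative digit-to-note sonification (a sketch of the mapping idea,
# not the study's algorithm): each digit 0-9 gets a fixed note of the
# C major scale, so a change in the data becomes a change in pitch.

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "D5", "E5"]

def sonify_digits(value):
    """Map each digit of a numeric value to a note name."""
    return [SCALE[int(d)] for d in str(value) if d.isdigit()]

# The digits of pi begin 3, 1, 4, 1, 5 ...
melody = sonify_digits(31415)
print(melody)  # ['F4', 'D4', 'G4', 'D4', 'A4']
```

A real sonification would feed streams of traffic measurements through such a mapping and render the notes with a synthesis library; the point here is only that any numeric stream deterministically becomes a pitch sequence a listener can track.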
The team created music using network traffic data from a real malware distribution network and presented the music to non-musicians. They found that non-musicians were able to accurately recognize changes in pitch when played on different instruments. "We are not only making music, but turning abstract data into something that humans can process," the authors write in their study. Cai says his vision is that someday, an analyst will be able to explore cybersecurity data with virtual reality goggles presenting the visualization of the network space. When the analyst moves closer to an individual data point, or a cluster of data, music representing that data would gradually become more audible. "The idea is to use all of humans' sensory channels to explore this cyber analytical space," Cai says. While Cai himself is not a trained musician, his two co-authors on the study are. Jakub Polaczyk and Katelyn Croft were once students in Carnegie Mellon University's College of Fine Arts. Polaczyk obtained his Artist Diploma in Composition in 2013 and is currently an award-winning composer based in New York City. Croft obtained her master's degree in harp performance in 2020 and is currently in Taiwan studying the influence of Western music on Asian music. Before graduating in 2020, Croft worked in Cai's lab on a virtual recital project. Polaczyk took Cai's University-wide course, "Creativity," in 2011 and the two have collaborated ever since. "It has been a very nice collaboration," Cai says. "This kind of cross-disciplinary collaboration really exemplifies CMU's strengths." Paper reference Compositional Sonification of Cybersecurity Data in a Baroque Style
Carnegie Mellon University's Yang Cai and colleagues have designed a method of making abnormal network traffic audible by rendering cybersecurity data musically. The researchers explored several sound mapping algorithms, converting numerical datasets into music with diverse melodies, harmonies, time signatures, and tempos. They produced music using network traffic data from an actual malware distribution network, and presented it to non-musicians, who could accurately identify pitch shifts when played on different instruments. Said the researchers, "We are not only making music, but turning abstract data into something that humans can process." Said Cai, "The process of sonification - using audio to perceptualize data - is not new, but sonification to make data more appealing to the human ear is."
15
LLNL Optimizes Flow-Through Electrodes for Electrochemical Reactors with 3D Printing
To take advantage of the growing abundance and cheaper costs of renewable energy, Lawrence Livermore National Laboratory (LLNL) scientists and engineers are 3D printing flow-through electrodes (FTEs), core components of electrochemical reactors used for converting CO2 and other molecules to useful products. As described in a paper published in the Proceedings of the National Academy of Sciences, LLNL engineers for the first time 3D-printed carbon FTEs - porous electrodes responsible for the reactions in the reactors - from graphene aerogels. By capitalizing on the design freedom afforded by 3D printing, researchers demonstrated they could tailor the flow in FTEs, dramatically improving mass transfer - the transport of liquid or gas reactants through the electrodes and onto the reactive surfaces. The work opens the door to establishing 3D printing as a "viable, versatile rapid-prototyping method" for flow-through electrodes and as a promising pathway to maximizing reactor performance, according to researchers. "At LLNL we are pioneering the use of three-dimensional reactors with precise control over the local reaction environment," said LLNL engineer Victor Beck, the paper's lead author. "Novel, high-performance electrodes will be essential components of next-generation electrochemical reactor architectures. This advancement demonstrates how we can leverage the control that 3D printing capabilities offer over the electrode structure to engineer the local fluid flow and induce complex, inertial flow patterns that improve reactor performance." Through 3D printing, researchers demonstrated that by controlling the electrodes' flow channel geometry, they could optimize electrochemical reactions while minimizing the tradeoffs seen in FTEs made through traditional means. Typical materials used in FTEs are "disordered" media, such as carbon fiber-based foams or felts, limiting opportunities for engineering their microstructure.
While cheap to produce, the randomly ordered materials suffer from uneven flow and mass transport distribution, researchers explained. "By 3D printing advanced materials such as carbon aerogels, it is possible to engineer macroporous networks in these materials without compromising the physical properties such as electrical conductivity and surface area," said co-author Swetha Chandrasekaran. The team reported the FTEs, printed in lattice structures through a direct ink writing method, enhanced mass transfer over previously reported 3D-printed efforts by one to two orders of magnitude, and achieved performance on par with conventional materials. Because the commercial viability and widespread adoption of electrochemical reactors are dependent on attaining greater mass transfer, the ability to engineer flow in FTEs will make the technology a much more attractive option for helping solve the global energy crisis, researchers said. Improving the performance and predictability of 3D-printed electrodes also makes them suitable for use in scaled-up reactors for high-efficiency electrochemical converters. "Gaining fine control over electrode geometries will enable advanced electrochemical reactor engineering that wasn't possible with previous generation electrode materials," said co-author Anna Ivanovskaya. "Engineers will be able to design and manufacture structures optimized for specific processes. Potentially, with development of manufacturing technology, 3D-printed electrodes may replace conventional disordered electrodes for both liquid and gas type reactors." LLNL scientists and engineers are currently exploring use of electrochemical reactors across a range of applications, including converting CO2 to useful fuels and polymers and electrochemical energy storage to enable further deployment of electricity from carbon-free and renewable sources.
Researchers said the promising results will allow them to rapidly explore the impact of engineered electrode architectures without expensive industrialized manufacturing techniques. Work is ongoing at LLNL to produce more robust electrodes and reactor components at higher resolutions through light-based 3D polymer printing techniques such as projection micro-stereolithography and two-photon lithography, followed by metallization. The team also will leverage high-performance computing to design better-performing structures and continue deploying the 3D-printed electrodes in larger and more complex reactors and full electrochemical cells. Funding for the effort came from the Laboratory Directed Research and Development program. Co-authors included co-principal investigators Sarah Baker, Eric Duoss and Marcus Worsley and LLNL scientist Jean-Baptiste Forien.
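One reason ordered lattices are attractive is that their surface area and porosity follow directly from printable geometry, unlike random felts. A back-of-envelope sketch, assuming a cubic lattice of cylindrical struts and ignoring overlap at the joints (the strut and pitch dimensions are invented for illustration, not taken from the paper):

```python
import math

# Approximate geometry of an ordered cubic lattice electrode: three
# orthogonal cylindrical struts (diameter d) per cubic unit cell of
# edge length L, neglecting the material shared at strut junctions.
def lattice_properties(d, L):
    surface_per_cell = 3 * math.pi * d * L        # lateral area of three struts
    volume_per_cell = L ** 3
    specific_area = surface_per_cell / volume_per_cell   # m^2 of surface per m^3
    solid_fraction = 3 * math.pi * d ** 2 / (4 * L ** 2)
    return specific_area, 1 - solid_fraction      # (area density, porosity)

# e.g. 100-micron struts on a 500-micron pitch (hypothetical values)
area, porosity = lattice_properties(d=100e-6, L=500e-6)
print(f"specific area ~ {area:.0f} m^2/m^3, porosity ~ {porosity:.2f}")
```

The point of the sketch is that both figures of merit are set by two printable parameters, which is exactly the kind of design lever disordered foams do not offer.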
Lawrence Livermore National Laboratory (LLNL) scientists three-dimensionally (3D) printed carbon flow-through electrodes (FTEs) for electrochemical reactors from graphene aerogels. The researchers demonstrated the ability to customize FTE flows and drastically enhance reactant transfer from electrodes onto reactive surfaces, optimizing electrochemical reactions. Said LLNL's Swetha Chandrasekaran, "By 3D-printing advanced materials such as carbon aerogels, it is possible to engineer macroporous networks in these materials without compromising the physical properties such as electrical conductivity and surface area." LLNL's Anna Ivanovskaya said the method should enable engineers "to design and manufacture structures optimized for specific processes."
16
Scientists Share Wiring Diagram Tracing Connections for 200,000 Mouse Brain Cells
Neuroscientists from Seattle's Allen Institute and other research institutions have wrapped up a five-year, multimillion-dollar project with the release of a high-resolution 3-D map showing the connections between 200,000 cells in a clump of mouse brain about as big as a grain of sand. The data collection, which is now publicly available online, was developed as part of the Machine Intelligence From Cortical Networks program, or MICrONS for short. MICrONS was funded in 2016 with $100 million in federal grants to the Allen Institute and its partners from the Intelligence Advanced Research Projects Activity, the U.S. intelligence community's equivalent of the Pentagon's DARPA think tank. MICrONS is meant to clear the way for reverse-engineering the structure of the brain to help computer scientists develop more human-like machine learning systems, but the database is likely to benefit biomedical researchers as well. "We're basically treating the brain circuit as a computer, and we asked three questions: What does it do? How is it wired up? What is the program?" R. Clay Reid, senior investigator at the Allen Institute and one of MICrONS' lead scientists, said today in a news release. "Experiments were done to literally see the neurons' activity, to watch them compute." The newly released data set takes in 120,000 neurons plus roughly 80,000 other types of brain cells, all contained in a cubic millimeter of the mouse brain's visual neocortex. In addition to mapping the cells in physical space, the data set traces the functional connections involving more than 523 million synapses. Researchers from the Allen Institute were joined in the project by colleagues from Princeton University, Baylor College of Medicine and other institutions. Baylor's team captured the patterns of neural activity of a mouse as it viewed images or movies of natural scenes.
After those experiments, the Allen Institute team preserved the target sample of brain tissue, cut it into more than 27,000 thin slices, and captured 150 million images of those slices using electron microscopes. Princeton's team then used machine learning techniques to turn those images into high-resolution maps of each cell and its internal components. "The reconstructions that we're presenting today let us see the elements of the neural circuit: the brain cells and the wiring, with the ability to follow the wires to map the connections between cells," Reid said. "The final step is to interpret this network, at which point we may be able to say we can read the brain's program." The resulting insights could help computer scientists design better hardware for AI applications, and they could also help medical researchers figure out treatments for brain disorders that involve alterations in cortical wiring. "Our five-year mission had an ambitious goal that many regarded as unattainable," said H. Sebastian Seung, a professor of neuroscience and computer science at Princeton. "Today, we have been rewarded by breathtaking new vistas of the mammalian cortex. As we transition to a new phase of discovery, we are building a community of researchers to use the data in new ways." The data set is hosted online by the Brain Observatory Storage Service & Database, or BossDB, and Amazon Web Services is making it freely accessible on the cloud through its Open Data Sponsorship Program. Google contributed storage and computing engine support through Google Cloud, and the database makes use of Neuroglancer, an open-source visualization tool developed by Google Research. MICrONS' emphasis on open access is in keeping with the principles that Microsoft co-founder Paul Allen championed when he founded the Allen Institute in 2003.
The Allen Institute for Brain Science is the institute's oldest and largest division, and since Allen's death in 2018, it has sharpened its focus on studies of neural circuitry and brain cell types.
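Conceptually, "following the wires" means traversing a directed graph in which neurons are nodes and synapses are edges. A minimal data-structure sketch (the cell IDs and strengths are toy values; the real release tracks ~200,000 cells and over 523 million synapses through BossDB and Neuroglancer, not this class):

```python
from collections import defaultdict

# Toy connectome: a directed multigraph of neurons (nodes) and
# synapses (edges with an optional strength).
class Connectome:
    def __init__(self):
        self.synapses = defaultdict(list)  # pre_id -> [(post_id, strength), ...]

    def add_synapse(self, pre_id, post_id, strength=1.0):
        self.synapses[pre_id].append((post_id, strength))

    def downstream(self, pre_id):
        """'Follow the wires': every cell this neuron synapses onto."""
        return [post for post, _ in self.synapses[pre_id]]

c = Connectome()
c.add_synapse(101, 202)
c.add_synapse(101, 303, strength=0.5)
print(c.downstream(101))  # -> [202, 303]
```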
A multi-institutional team of neuroscientists spent five years and $100 million developing a high-resolution model detailing the connections between 200,000 mouse brain cells. Created under the federally-funded Machine Intelligence From Cortical Networks (MICrONS) program, the dataset encompasses 120,000 neurons and about 80,000 other types of brain cells in a cubic millimeter of a mouse brain's visual neocortex. The researchers recorded neural activity patterns as the mouse watched images or films of natural scenes, then captured 150 million images of fractionated brain tissue using electron microscopes. Each cell and its internal structure were mapped using machine learning techniques. R. Clay Reid at Seattle's Allen Institute for Brain Science said, "The final step is to interpret this network, at which point we may be able to say we can read the brain's program."
17
Census Data Change to Protect Privacy Rattles Researchers, Minority Groups
A plan to protect the confidentiality of Americans' responses to the 2020 census by injecting small, calculated distortions into the results is raising concerns that it will erode their usability for research and distribution of state and federal funds. The Census Bureau is due to release the first major results of the decennial count in mid-August. They will offer the first detailed look at the population and racial makeup of thousands of counties and cities, as well as tribal areas, neighborhoods, school districts and smaller areas that will be used to redraw congressional, legislative and local districts to balance their populations.
The U.S. Census Bureau will use a complex algorithm to adjust 2020 Census statistics to prevent the data from being recombined to disclose information about individual respondents. The bureau's Ron Jarmin said it will use differential privacy, an approach it has long employed in some fashion, which involves adding statistical noise to data. Small random numbers, both positive and negative, will be used to adjust most of the Census totals, with inconsistent subtotals squared up. The Bureau indicated that for most groups and places, this will result in fairly accurate totals, although distortion is likely to be higher for smaller groups and areas like census blocks. This has raised concerns among local officials, as population-based formulas are used to allocate billions of dollars in federal and state aid. University of Minnesota researchers said after a fifth test of the method that "major discrepancies remain for minority populations."
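The "statistical noise" idea behind differential privacy can be illustrated with the textbook Laplace mechanism. This is only a sketch of the general technique; the bureau's actual TopDown algorithm is far more elaborate, and it also post-processes results so that subtotals add up consistently and stay non-negative:

```python
import math
import random

# Textbook Laplace-mechanism sketch: perturb a true count with noise
# whose scale is sensitivity / epsilon. A count query changes by at
# most 1 when one respondent is added or removed, so sensitivity = 1;
# a smaller epsilon means more noise and stronger privacy.
def laplace_noisy_count(true_count, epsilon, sensitivity=1.0):
    scale = sensitivity / epsilon
    u = random.random() - 0.5                       # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(1)
print(laplace_noisy_count(1000, epsilon=0.5))
```

The key property, visible even in this sketch, is the tradeoff critics worry about: averaged over many queries the noise cancels out, but any individual small total, such as a census block, can land noticeably far from its true value.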
18
Robot Apocalypse Hard to Find in America's Small, Mid-Sized Factories
CLEVELAND, Aug 2 (Reuters) - When researchers from the Massachusetts Institute of Technology visited Rich Gent's machine shop here to see how automation was spreading to America's small and medium-sized factories, they expected to find robots. They did not. "In big factories - when you're making the same thing over and over, day after day, robots make total sense," said Gent, who with his brother runs Gent Machine Co, a 55-employee company founded by his great-grandfather, "but not for us." Even as some analysts warn that robots are about to displace millions of blue-collar jobs in the U.S. industrial heartland, the reality at smaller operations like Gent is far different. Among the 34 companies with 500 employees or fewer in Ohio, Massachusetts and Arizona that the MIT researchers visited in their project, only one had bought robots in large numbers in the last five years - and that was an Ohio company that had been acquired by a Japanese multinational which pumped in money for the new automation. In all the other Ohio plants they studied, they found only a single robot purchased in the last five years. In Massachusetts they found a company that had bought two, while in Arizona they found three companies that had added a handful. Anna Waldman-Brown, a PhD student who worked on the report with MIT Professor Suzanne Berger, said she was "surprised" by the lack of the machines. "We had a roboticist on our research team, because we expected to find robots," she said. Instead, at one company, she said managers showed them a computer they had recently installed in a corner of the factory - which allowed workers to note their daily production figures on a spreadsheet, rather than jot down that information in paper notebooks. "The bulk of the machines we saw were from before the 1990s," she said, adding that many had installed new computer controllers to upgrade the older machines - a common practice in these tight-fisted operations. 
Most had also bought other types of advanced machinery - such as computer-guided cutting machines and inspection systems. But not robots. Robots are just one type of factory automation, which encompasses a wide range of machines used to move and manufacture goods - including conveyor belts and labeling machines. Nick Pinkston, CEO of Volition, a San Francisco company that makes software used by robotics engineers to automate factories, said smaller firms lack the cash to take risks on new robots. "They think of capital payback periods of as little as three months, or six - and it all depends on the contract" with the consumer who is ordering parts to be made by the machine. This is bad news for the U.S. economy. Automation is a key to boosting productivity, which keeps U.S. operations competitive. Since 2005, U.S. labor productivity has grown at an average annual rate of only 1.3% - below the post-World War 2 trend of well over 2% - and the average has dipped even more since 2010. Researchers have found that larger firms are more productive on average and pay higher wages than their smaller counterparts, a divergence attributed at least in part to the ability of industry giants to invest heavily in cutting-edge technologies. Yet small and medium-sized manufacturers remain a backbone of U.S. industry, often churning out parts needed to keep assembly lines rolling at big manufacturers. If they fall behind on technology, it could weigh on the entire sector. These small and medium-sized manufacturers are also a key source of relatively good jobs - accounting for 43% of all manufacturing workers.

LIMITATIONS OF ROBOTS

One barrier for smaller companies is finding the skilled workers needed to run robots. "There's a lot of amazing software that's making robots easier to program and repurpose - but not nearly enough people to do that work," said Ryan Kelly, who heads a group that promotes new technology to manufacturers inside the Association for Manufacturing Technology.
To be sure, robots are spreading to more corners of the industrial economy, just not as quickly as the MIT researchers and many others expected. Last year, for the first time, most of the robots ordered by companies in North America were not destined for automotive factories - a shift partly attributed to the development of cheaper and more flexible machines. Those are the type of machines especially needed in smaller operations. And it seems certain robots will take over more jobs as they become more capable and affordable. One example: their rapid spread in e-commerce warehouses in recent years. Carmakers and other big companies still buy most robots, said Jeff Burnstein, president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan. "But there's a lot more in small and medium-size companies than ever before." Michael Tamasi, owner of AccuRounds in Avon, Massachusetts, is a small manufacturer who recently bought a robot attached to a computer-controlled cutting machine. "We're getting another machine delivered in September - and hope to attach a robot arm to that one to load and unload it," he said. But there are some tasks where the technology remains too rigid or simply not capable of getting the job done. For instance, Tamasi recently looked at buying a robot to polish metal parts. But the complexity of the shape made it impossible. "And it was kind of slow," he said. "When you think of robots, you think better, faster, cheaper - but this was kind of the opposite." And he still needed a worker to load and unload the machine. For a company like Cleveland's Gent, which makes parts for things like refrigerators, auto airbags and hydraulic pumps, the main barrier to getting robots is the cost and uncertainty over whether the investment will pay off, which in turn hinges on the plans and attitudes of customers. And big customers can be fickle. 
Eight years ago, Gent landed a contract to supply fasteners used to put together battery packs for Tesla Inc (TSLA.O) - and the electric-car maker soon became its largest customer. But Gent never got assurances from Tesla that the business would continue for long enough to justify buying the robots it could have used to make the fasteners. "If we'd known Tesla would go on that long, we definitely would have automated our assembly process," said Gent, who said they looked at automating the line twice over the years. But he does not regret his caution. Earlier this year, Tesla notified Gent that it was pulling the business. "We're not bitter," said Gent. "It's just how it works." Gent does spend heavily on new equipment, relative to its small size - about $500,000 a year from 2011 to 2019. One purchase was a $1.6 million computer-controlled cutting machine that cut the cycle time to make the Tesla parts down from 38 seconds to 7 seconds - a major gain in productivity that flowed straight to Gent's bottom line. "We found another part to make on the machine," said Gent.
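The payback arithmetic that drives these decisions is simple to sketch. The machine cost and cycle times below come from the article ($1.6 million, 38 seconds down to 7); the monthly part volume and hourly machine rate are invented purely for illustration:

```python
# Rough payback-period calculation for a faster machine: value the
# machine-hours saved at an hourly operating rate, then ask how many
# months of savings cover the purchase price.
def payback_months(machine_cost, old_cycle_s, new_cycle_s,
                   parts_per_month, machine_rate_per_hr):
    hours_saved = parts_per_month * (old_cycle_s - new_cycle_s) / 3600.0
    monthly_savings = hours_saved * machine_rate_per_hr
    return machine_cost / monthly_savings

# Hypothetical volume and rate; only the cost and cycle times are sourced.
months = payback_months(1_600_000, 38, 7,
                        parts_per_month=200_000, machine_rate_per_hr=75)
print(f"payback in about {months:.1f} months")
```

The sketch also shows why a shop like Gent hesitates: the result is only meaningful if the customer contract lasts longer than the computed payback period, which is exactly the assurance Tesla never gave.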
Although analysts have warned that millions of blue-collar jobs in the U.S. industrial heartland will soon be displaced by robots, that is not yet the case at small and medium-sized factories. Massachusetts Institute of Technology (MIT) researchers studied 34 companies with 500 or fewer employees in Ohio, Massachusetts, and Arizona, and found just one had acquired a significant number of robots in the past five years. MIT's Anna Waldman-Brown said, "The bulk of the machines we saw were from before the 1990s," and many older machines were upgraded with new computer controllers. Other companies have purchased advanced equipment like computer-guided cutting machines and inspection systems, but not robots, the researchers found, because smaller companies lack the money for robots or the skilled workers necessary to operate them.
scitechnews
CLEVELAND, Aug 2 (Reuters) - When researchers from the Massachusetts Institute of Technology visited Rich Gent's machine shop here to see how automation was spreading to America's small and medium-sized factories, they expected to find robots. They did not. "In big factories - when you're making the same thing over and over, day after day, robots make total sense," said Gent, who with his brother runs Gent Machine Co, a 55-employee company founded by his great-grandfather, "but not for us." Even as some analysts warn that robots are about to displace millions of blue-collar jobs in the U.S. industrial heartland, the reality at smaller operations like Gent is far different. Among the 34 companies with 500 employees or fewer in Ohio, Massachusetts and Arizona that the MIT researchers visited in their project, only one had bought robots in large numbers in the last five years - and that was an Ohio company that had been acquired by a Japanese multinational which pumped in money for the new automation. In all the other Ohio plants they studied, they found only a single robot purchased in the last five years. 
In Massachusetts they found a company that had bought two, while in Arizona they found three companies that had added a handful. Anna Waldman-Brown, a PhD student who worked on the report with MIT Professor Suzanne Berger, said she was "surprised" by the lack of the machines. "We had a roboticist on our research team, because we expected to find robots," she said. Instead, at one company, she said managers showed them a computer they had recently installed in a corner of the factory - which allowed workers to note their daily production figures on a spreadsheet, rather than jot down that information in paper notebooks. "The bulk of the machines we saw were from before the 1990s," she said, adding that many had installed new computer controllers to upgrade the older machines - a common practice in these tight-fisted operations. Most had also bought other types of advanced machinery - such as computer-guided cutting machines and inspection systems. But not robots. Robots are just one type of factory automation, which encompasses a wide range of machines used to move and manufacture goods - including conveyor belts and labeling machines. Nick Pinkston, CEO of Volition, a San Francisco company that makes software used by robotics engineers to automate factories, said smaller firms lack the cash to take risks on new robots. "They think of capital payback periods of as little as three months, or six - and it all depends on the contract" with the consumer who is ordering parts to be made by the machine. This is bad news for the U.S. economy. Automation is a key to boosting productivity, which keeps U.S. operations competitive. Since 2005, U.S. labor productivity has grown at an average annual rate of only 1.3% - below the post-World War 2 trend of well over 2% - and the average has dipped even more since 2010. 
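Pinkston's point about payback periods can be made concrete with a little arithmetic. The numbers and the `payback_months` helper below are invented for illustration (the article gives no machine prices); they are not from the MIT study:

```python
# Hypothetical payback-period sketch. A machine is worth buying only if the
# extra margin it generates recoups its cost within the shop's required
# payback window -- as little as three to six months, per Pinkston, because
# the underlying customer contract may not last longer than that.

def payback_months(machine_cost, monthly_margin):
    """Months of added contribution margin needed to recoup the machine."""
    return machine_cost / monthly_margin

# An (invented) $150,000 robot adding $20,000/month of margin pays back in
# 7.5 months -- too slow for a shop that demands payback within 6 months.
months = payback_months(150_000, 20_000)
print(round(months, 1))
```

This framing explains why the same robot can be an easy purchase for a large factory with multi-year production contracts and an unjustifiable risk for a small job shop.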
Researchers have found that larger firms are more productive on average and pay higher wages than their smaller counterparts, a divergence attributed at least in part to the ability of industry giants to invest heavily in cutting-edge technologies. Yet small and medium-sized manufacturers remain a backbone of U.S. industry, often churning out parts needed to keep assembly lines rolling at big manufacturers. If they fall behind on technology, it could weigh on the entire sector. These small and medium-sized manufacturers are also a key source of relatively good jobs - accounting for 43% of all manufacturing workers.

LIMITATIONS OF ROBOTS

One barrier for smaller companies is finding the skilled workers needed to run robots. "There's a lot of amazing software that's making robots easier to program and repurpose - but not nearly enough people to do that work," said Ryan Kelly, who heads a group that promotes new technology to manufacturers inside the Association for Manufacturing Technology. To be sure, robots are spreading to more corners of the industrial economy, just not as quickly as the MIT researchers and many others expected. Last year, for the first time, most of the robots ordered by companies in North America were not destined for automotive factories - a shift partly attributed to the development of cheaper and more flexible machines. Those are the type of machines especially needed in smaller operations. And it seems certain robots will take over more jobs as they become more capable and affordable. One example: their rapid spread in e-commerce warehouses in recent years. Carmakers and other big companies still buy most robots, said Jeff Burnstein, president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan. "But there's a lot more in small and medium-size companies than ever before." 
Michael Tamasi, owner of AccuRounds in Avon, Massachusetts, is a small manufacturer who recently bought a robot attached to a computer-controlled cutting machine. "We're getting another machine delivered in September - and hope to attach a robot arm to that one to load and unload it," he said. But there are some tasks where the technology remains too rigid or simply not capable of getting the job done. For instance, Tamasi recently looked at buying a robot to polish metal parts. But the complexity of the shape made it impossible. "And it was kind of slow," he said. "When you think of robots, you think better, faster, cheaper - but this was kind of the opposite." And he still needed a worker to load and unload the machine. For a company like Cleveland's Gent, which makes parts for things like refrigerators, auto airbags and hydraulic pumps, the main barrier to getting robots is the cost and uncertainty over whether the investment will pay off, which in turn hinges on the plans and attitudes of customers. And big customers can be fickle. Eight years ago, Gent landed a contract to supply fasteners used to put together battery packs for Tesla Inc (TSLA.O) - and the electric-car maker soon became its largest customer. But Gent never got assurances from Tesla that the business would continue for long enough to justify buying the robots it could have used to make the fasteners. "If we'd known Tesla would go on that long, we definitely would have automated our assembly process," said Gent, who said they looked at automating the line twice over the years. But he does not regret his caution. Earlier this year, Tesla notified Gent that it was pulling the business. "We're not bitter," said Gent. "It's just how it works." Gent does spend heavily on new equipment, relative to its small size - about $500,000 a year from 2011 to 2019. 
One purchase was a $1.6 million computer-controlled cutting machine that cut the cycle time to make the Tesla parts down from 38 seconds to 7 seconds - a major gain in productivity that flowed straight to Gent's bottom line. "We found another part to make on the machine," said Gent.
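The cycle-time improvement translates directly into throughput, which is where the productivity gain comes from. A quick sketch of that arithmetic (the helper below is illustrative, not from the article):

```python
# Throughput math behind the $1.6 million cutting machine: dropping the
# cycle time for the Tesla parts from 38 seconds to 7 seconds multiplies
# hourly output by 38/7, roughly 5.4x.

def parts_per_hour(cycle_seconds):
    """Parts produced per hour at a given per-part cycle time."""
    return 3600 / cycle_seconds

before = parts_per_hour(38)  # about 94.7 parts/hour
after = parts_per_hour(7)    # about 514.3 parts/hour
speedup = after / before     # equals 38/7, about 5.43x
```

Unlike the robot purchases Gent passed on, this gain did not depend on any one customer contract lasting, which is presumably why the machine was an easier call.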
19
Insulator-Conductor Transition Points Toward Ultra-Efficient Computing
For the first time, researchers have been able to image how atoms in a computer switch move around on fast timescales while it turns on and off. This ability to peer into the atomic world may hold the key to a new kind of switch for computers that will speed up computing and reduce the energy required for computer processing. The research team, made up of scientists from the Department of Energy's SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Penn State University and Purdue University, was able to capture snapshots of atomic motion in a device while it was switching. The researchers believe the new insights this technique will generate into how switches operate will not only improve future switch technology, but will also resolve the ultimate speed and energy-consumption limits for computing devices. Switches in computer chips control the flow of electrons. By applying an electrical charge to the switch and then removing that charge, the switch can be toggled between acting as an insulator that blocks the flow of electrons and a conductor that allows it. This on/off switch is the basis for the "0-1" of binary computer logic. While studying a switch made from vanadium dioxide, the researchers were able to detect with their imaging technique the existence of a short-lived transition stage as the material goes from an insulator to a conductor and back again. "In this transient state, the structure remains the same as in the starting insulating state, but there is electronic reorganization which causes it to become metallic," explained Aditya Sood, a postdoctoral researcher at SLAC National Lab and Stanford University. "We infer this from subtle signatures in how the electron diffraction pattern changes during this electrically-driven transition." In order to observe this transient state, the researchers had to develop a real-time imaging technology based on electron diffraction. 
Electron diffraction by itself has existed for many decades and is used routinely in transmission electron microscopes (TEMs). But in these previous kinds of applications, electron imaging was used just to study a material's structure in a static way, or to probe its evolution on slow timescales. While ultrafast electron diffraction (UED) has been developed to make time-resolved measurements of atomic structure, previous implementations of this technique relied on optical pulses to impulsively excite (or "pump") materials and image the resulting atomic motions. What the scientists did here for the first time was create an ultrafast technique in which electrical (not optical) pulses provide the impulsive excitation. This makes it possible to electrically pulse a device and look at the ensuing atomic-scale motions on fast timescales (down to nanoseconds), while simultaneously measuring current through the device.

[Figure caption: The team used electrical pulses, shown in blue, to turn their custom-made switches on and off several times. They timed these electrical pulses to arrive just before the electron pulses produced by SLAC's ultrafast electron diffraction source MeV-UED, which captured the atomic motions. Credit: Greg Stewart/SLAC National Accelerator Laboratory]

"We now have a direct way to correlate very fast atomic movements at the angstrom scale with electronic flow across device length scales," said Sood. To do this, the researchers built a new apparatus that integrated an electronic device to which they could apply fast electrical bias pulses, such that each electrical bias pulse was followed by a "probing" electron pulse (which creates a diffraction pattern, revealing where the atoms are) with a controllable time delay. "By repeating this many times, each time changing the time delay, we could effectively construct a movie of the atomic movements during and after electrical biasing," explained Sood. 
Additionally, the researchers built an electrical circuit around the device to concurrently measure the current flowing through it during the transient switching process. While custom-made vanadium-dioxide-based switches were fabricated for this research, Sood says the technique could work on any kind of switch, as long as the switch is 100 nanometers or thinner to allow electrons to be transmitted through it. "It would be interesting to see if the multi-stage, transient switching phenomenon we observe in our vanadium-dioxide-based devices is found more broadly across the solid-state device landscape," said Sood. "We are thrilled by the prospect of looking at some of the emerging memory and logic technologies, where for the first time, we can visualize ultrafast atomic motions occurring during switching." Aaron Lindenberg, a professor in the Department of Materials Science and Engineering at Stanford and a collaborator with Sood on this work, said, "More generally, this work also opens up new possibilities for using electric fields to synthesize and stabilize new materials with potentially useful functional properties." The group's research was published in a recent issue of the journal Science.
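The measurement loop Sood describes - pump with an electrical pulse, probe with a delayed electron pulse, sweep the delay, repeat - can be sketched as a toy simulation. Everything below (the exponential relaxation curve, time constant, and shot counts) is invented for illustration; it is not the team's actual analysis code:

```python
import math

def lattice_response(t_ns, tau_ns=5.0):
    """Invented stand-in for the diffraction-intensity change after the pump
    pulse: an exponential relaxation with time constant tau_ns."""
    return math.exp(-t_ns / tau_ns) if t_ns >= 0 else 0.0

def scan(delays_ns, shots_per_delay=3):
    """Sweep the pump-probe delay; each delay point yields one frame of the
    'movie'. In the real apparatus each point is averaged over many repeated
    pulse pairs; here the toy response is noiseless, so the mean is exact."""
    movie = []
    for d in delays_ns:
        shots = [lattice_response(d) for _ in range(shots_per_delay)]
        movie.append(sum(shots) / len(shots))
    return movie

# Frames at increasing delay trace the device relaxing back toward its
# ground state after the electrical pump pulse.
frames = scan([0.0, 2.0, 5.0, 10.0, 20.0])
```

The essential idea is that each probe pulse captures only one instant, so the "movie" is assembled stroboscopically across many identical switching events rather than filmed in a single shot.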
A team of researchers has imaged the movement of atoms in a computer switch turning on and off in real time, which could help lead to super-efficient computing. Researchers at the U.S. Department of Energy's SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Pennsylvania State University, and Purdue University used the method to detect a short-lived transition stage between insulator-conductor flipping in a vanadium dioxide switch. The ultrafast electron diffraction technique uses electrical rather than optical pulses to supply the impulsive atomic excitation, exposing atomic-scale motions on fast timescales and measuring current through the device. Stanford's Aditya Sood said, "We now have a direct way to correlate very fast atomic movements at the angstrom scale with electronic flow across device length scales."
20
AI Carpenter Can Recreate Furniture From Photos
An algorithm developed by University of Washington (UW) researchers can render photos of wooden objects into three-dimensional (3D) models with enough detail to be replicated by carpenters. The researchers factored in the geometric limitations of flat sheets of wood and how wooden parts can interlock. They captured photos of wooden items with a smartphone, and the algorithm generated accurate plans for their construction after less than 10 minutes of processing. Said UW's James Noeckel, "It doesn't really require that you observe the object completely because we make these assumptions about how objects are fabricated. We don't need to take pictures of every single surface, which is something you would need for a traditional 3D reconstruction algorithm to get complete shapes."
21
Developers Reveal Programming Languages They Love, Dread
Programmer online community Stack Overflow's 2021 survey of 83,439 software developers in 181 countries found Mozilla's Rust to be the "most loved" language, with 86.69% of respondents who worked with it in the past year saying they want to keep working with it next year. Rust is popular for systems programming and is under consideration for Linux kernel development, partly because it can help remove memory-related security flaws. Though deemed most loved, Rust was nominated to the survey by just 5,044 developers, while 18,711 respondents nominated Microsoft's TypeScript, the third most "loved" language; TypeScript compiles into JavaScript and helps developers more efficiently program large front-end Web applications. More developers dreaded (66%) than loved (39.56%) the widely used C language, while Java likewise had fewer champions (47%) than detractors dreading its use (52.85%).
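Stack Overflow's "loved" and "dreaded" figures are two halves of one ratio: among respondents who worked with a language in the past year, "loved" is the share who want to keep working with it, and "dreaded" is the remainder, so the pair sums to 100%. A minimal sketch of that computation (the helper function and the sample continue-count are illustrative assumptions, not survey data):

```python
def loved_dreaded(current_users, want_to_continue):
    """Return (loved %, dreaded %): the split of a language's current users
    by whether they want to keep using it, per Stack Overflow's definition."""
    loved = 100.0 * want_to_continue / current_users
    return round(loved, 2), round(100.0 - loved, 2)

# A small but enthusiastic user base can top the "loved" list while a far
# more widely used language ranks lower -- which is how Rust's 5,044
# nominations beat TypeScript's 18,711. (The continue-count below is an
# invented example chosen to land near Rust's reported 86.69%.)
print(loved_dreaded(5044, 4373))
```

This also explains the article's caveat: "most loved" measures satisfaction among current users, not overall popularity or usage.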
22
Apps That Are Redefining Accessibility
By some estimates, less than 10% of websites are accessible, meaning they help people with visual disabilities access their content. Some companies are tackling the issue by rolling out apps that can be used by anyone, regardless of visual capabilities. One example is Finnish developer Ilkka Pirttimaa, whose BlindSquare app incorporates Open Street Map and Foursquare data to help the visually impaired navigate streets; the app also integrates with ride-hailing apps like Uber. The Be My Eyes app connects visually impaired individuals to sighted volunteers via live video calls for assistance with everyday tasks, while the AccessNow app and website map and review locations based on their accessibility. AccessNow's Maayan Ziv said, "Accessibility is one more way in which you can invite people to be part of something, and it really does touch every kind of industry."
23
Security Bug Affects Nearly All Hospitals in North America
Researchers from the IoT security firm Armis have discovered nine critical vulnerabilities in the Nexus Control Panel, which powers all current models of Translogic's pneumatic tube system (PTS) stations by Swisslog Healthcare. The vulnerabilities have been given the name PwnedPiper and are particularly concerning because the Translogic PTS system is used in 3,000 hospitals worldwide, including in more than 80 percent of major hospitals in North America. The system is used to deliver medications, blood products and various lab samples across multiple departments at the hospitals where it is used. The PwnedPiper vulnerabilities can be exploited by an unauthenticated hacker to take over PTS stations and gain full control over a target hospital's tube network. With this control, cybercriminals could launch attacks ranging from ransomware and denial-of-service to full-blown man-in-the-middle (MITM) attacks that can alter the paths of a network's carriers to deliberately sabotage hospitals. Despite the prevalence of modern PTS systems that are IP-connected and found in many hospitals, the security of these systems had never been thoroughly analyzed or researched until now. Of the nine PwnedPiper vulnerabilities discovered by Armis, five can be used to achieve remote code execution, gain access to a hospital's network and take over Nexus stations. By compromising a Nexus station, an attacker can use it for reconnaissance to harvest data from the station - including RFID credentials of employees who use the PTS system and details about the functions or locations of each system - and gain an understanding of the physical layout of a hospital's PTS network. From here, an attacker can take over all Nexus stations in a hospital's tube network and then hold them hostage in a ransomware attack. 
Ben Seri, vice president of research at Armis, provided further insight in a press release on how the company worked with Swisslog to patch the PwnedPiper vulnerabilities it discovered, saying: "Armis disclosed the vulnerabilities to Swisslog on May 1, 2021, and has been working with the manufacturer to test the available patch and ensure proper security measures will be provided to customers. With so many hospitals reliant on this technology we've worked diligently to address these vulnerabilities to increase cyber resiliency in these healthcare environments, where lives are on the line." Armis will present its research on PwnedPiper at this year's Black Hat USA security conference; as of now, only one of the nine vulnerabilities remains unpatched.
Researchers at security firm Armis identified nine critical vulnerabilities in the Nexus Control Panel that powers all current models of Swisslog Healthcare's Translogic pneumatic tube system (PTS) stations. The Translogic PTS system is used in 3,000 hospitals worldwide and 80% of major hospitals in North America to deliver medications, blood products, and lab samples across multiple hospital departments. Hackers can exploit the vulnerabilities, dubbed PwnedPiper, to gain control over a hospital's pneumatic tube network, with the potential to launch ransomware attacks. Armis' Ben Seri said his firm had told Swisslog of the vulnerabilities at the beginning of May, "and has been working with the manufacturer to test the available patch and ensure proper security measures will be provided to customers."
Platform Teaches Nonexperts to Use ML
Machine-learning algorithms are used to find patterns in data that humans wouldn't otherwise notice, and are being deployed to help inform decisions big and small - from COVID-19 vaccine development to Netflix recommendations. New award-winning research from the Cornell Ann S. Bowers College of Computing and Information Science explores how to help nonexperts effectively, efficiently and ethically use machine-learning algorithms to better enable industries beyond the computing field to harness the power of AI. "We don't know much about how nonexperts in machine learning come to learn algorithmic tools," said Swati Mishra, a Ph.D. student in the field of information science. "The reason is that there's a hype that's developed that suggests machine learning is for the ordained." Mishra is lead author of "Designing Interactive Transfer Learning Tools for ML Non-Experts," which received a Best Paper Award at the annual ACM CHI Virtual Conference on Human Factors in Computing Systems, held in May. As machine learning has entered fields and industries traditionally outside of computing, the need for research and effective, accessible tools to enable new users in leveraging artificial intelligence is unprecedented, Mishra said. Existing research into these interactive machine-learning systems has mostly focused on understanding the users and the challenges they face when navigating the tools. Mishra's latest research - including the development of her own interactive machine-learning platform - breaks fresh ground by investigating the inverse: how to better design the system so that users with limited algorithmic expertise but vast domain expertise can learn to integrate preexisting models into their own work. "When you do a task, you know what parts need manual fixing and what needs automation," said Mishra, a 2021-2022 Bloomberg Data Science Ph.D. fellow.
"If we design machine-learning tools correctly and give enough agency to people to use them, we can ensure their knowledge gets integrated into the machine-learning model." Mishra takes an unconventional approach with this research by turning to a complex process called "transfer learning" as a jumping-off point to initiate nonexperts into machine learning. Transfer learning is a high-level and powerful machine-learning technique typically reserved for experts, wherein users repurpose and tweak existing, pretrained machine-learning models for new tasks. The technique alleviates the need to build a model from scratch, which requires lots of training data, allowing the user to repurpose a model trained to identify images of dogs, say, into a model that can identify cats or, with the right expertise, even skin cancers. "By intentionally focusing on appropriating existing models into new tasks, Swati's work helps novices not only use machine learning to solve complex tasks, but also take advantage of machine-learning experts' continuing developments," said Jeff Rzeszotarski, assistant professor in the Department of Information Science and the paper's senior author. "While our eventual goal is to help novices become advanced machine-learning users, providing some 'training wheels' through transfer learning can help novices immediately employ machine learning for their own tasks." Mishra's research exposes transfer learning's inner computational workings through an interactive platform so nonexperts can better understand how machines crunch datasets and make decisions. Through a corresponding lab study with people with no background in machine-learning development, Mishra was able to pinpoint precisely where beginners lost their way, what their rationales were for making certain tweaks to the model and what approaches were most successful or unsuccessful.
In the end, the duo found participating nonexperts were able to successfully use transfer learning and alter existing models for their own purposes. However, researchers discovered that inaccurate perceptions of machine intelligence frequently slowed learning among nonexperts. Machines don't learn like humans do, Mishra said. "We're used to a human-like learning style, and intuitively we tend to employ strategies that are familiar to us," she said. "If the tools do not explicitly convey this difference, the machines may never really learn. We as researchers and designers have to mitigate user perceptions of what machine learning is. Any interactive tool must help us manage our expectations." Lou DiPietro is a communications specialist for the Department of Information Science.
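The transfer-learning recipe described above - keep a pretrained model's feature extractor frozen and train only a small new "head" on the new task - can be sketched in a few lines. This is a generic illustration, not Mishra's platform: the "pretrained" weights here are randomly generated stand-ins (in practice they would be loaded from a real model), and the dimensions, dataset, and perceptron head are all invented for the sketch.

```python
import random
import math

random.seed(0)

# Stand-in for weights pretrained on a large source task (e.g., dog
# images); in real transfer learning these are loaded, not generated.
D_IN, D_FEAT = 32, 8
pretrained_W = [[random.gauss(0, 1) for _ in range(D_FEAT)]
                for _ in range(D_IN)]

def extract_features(x):
    """Frozen pretrained layer: the part transfer learning reuses as-is."""
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
            for col in zip(*pretrained_W)]

# Small labeled dataset for the NEW task, labeled by an unknown rule
# that happens to be linear in the pretrained feature space.
true_head = [random.gauss(0, 1) for _ in range(D_FEAT)]
data = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(D_IN)]
    f = extract_features(x)
    y = 1 if sum(fi * hi for fi, hi in zip(f, true_head)) > 0 else -1
    data.append((f, y))

# Transfer learning step: pretrained_W stays frozen; only the new head
# is trained, here with a plain perceptron update rule.
head = [0.0] * D_FEAT
for _ in range(100):  # epochs
    for f, y in data:
        if y * sum(fi * hi for fi, hi in zip(f, head)) <= 0:
            head = [hi + y * fi for hi, fi in zip(head, f)]

accuracy = sum(
    1 for f, y in data
    if y * sum(fi * hi for fi, hi in zip(f, head)) > 0
) / len(data)
print(f"new-task accuracy with frozen features: {accuracy:.2f}")
```

Only 8 head weights are learned here instead of all 264 parameters, which is exactly why transfer learning works with far less labeled data than training from scratch.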
An interactive machine learning (ML) platform developed by Cornell University scientists is designed to train nonexperts to use algorithms effectively, efficiently, and ethically. Cornell's Swati Mishra said, "If we design machine learning tools correctly and give enough agency to people to use them, we can ensure their knowledge gets integrated into the machine learning model." Said Cornell's Jeff Rzeszotarski, "While our eventual goal is to help novices become advanced machine-learning users, providing some 'training wheels' through transfer learning can help novices immediately employ machine learning for their own tasks." Added Mishra, "We as researchers and designers have to mitigate user perceptions of what machine learning is. Any interactive tool must help us manage our expectations."
Robotic Police Dogs: Useful Hounds or Dehumanizing Machines?
HONOLULU (AP) - If you're homeless and looking for temporary shelter in Hawaii's capital, expect a visit from a robotic police dog that will scan your eye to make sure you don't have a fever. That's just one of the ways public safety agencies are starting to use Spot, the best-known of a new commercial category of robots that trot around with animal-like agility. The handful of police officials experimenting with the four-legged machines say they're just another tool, like existing drones and simple wheeled robots, to keep emergency responders out of harm's way as they scout for dangers. But privacy watchdogs - the human kind - warn that police are secretly rushing to buy the robots without setting safeguards against aggressive, invasive or dehumanizing uses. In Honolulu, the police department spent about $150,000 in federal pandemic relief money to buy their Spot from robotics firm Boston Dynamics for use at a government-run tent city near the airport. "Because these people are houseless it's considered OK to do that," said Jongwook Kim, legal director at the American Civil Liberties Union of Hawaii. "At some point it will come out again for some different use after the pandemic is over." Acting Lt. Joseph O'Neal of the Honolulu Police Department's community outreach unit defended the robot's use in a media demonstration earlier this year. He said it has protected officers, shelter staff and residents by scanning body temperatures between meal times at a shelter where homeless people could quarantine and get tested for COVID-19. The robot is also used to remotely interview individuals who have tested positive. "We have not had a single person out there that said, 'That's scary, that's worrisome,'" O'Neal said. "We don't just walk around and arbitrarily scan people." Police use of such robots is still rare and largely untested - and hasn't always gone over well with the public. 
Honolulu officials faced a backlash when a local news organization, Honolulu Civil Beat, revealed that the Spot purchase was made with federal relief money. Late last year, the New York Police Department started using Spot after painting it blue and renaming it "Digidog." It went mostly unnoticed until New Yorkers started spotting it in the wild and posting videos to social media. Spot quickly became a sensation, drawing a public outcry that led the police department to abruptly return Digidog to its maker. "This is some Robocop stuff, this is crazy," was the reaction in April from Democratic U.S. Rep. Jamaal Bowman. He was one of several New York politicians to speak out after a widely shared video showed the robot strutting with police officers responding to a domestic-violence report at a high-rise public housing building in Manhattan. Days later, after further scrutiny from elected city officials, the department said it was terminating its lease and returning the robot. The expensive machine arrived with little public notice or explanation, public officials said, and was deployed to already over-policed public housing. Use of the high-tech canine also clashed with Black Lives Matter calls to defund police operations and reinvest in other priorities. The company that makes the robots, Boston Dynamics, says it's learned from the New York fiasco and is trying to do a better job of explaining to the public - and its customers - what Spot can and cannot do. That's become increasingly important as Boston Dynamics becomes part of South Korean carmaker Hyundai Motor Company, which in June closed an $880 million deal for a controlling stake in the robotics firm. "One of the big challenges is accurately describing the state of the technology to people who have never had personal experience with it," Michael Perry, vice president of business development at Boston Dynamics, said in an interview.
"Most people are applying notions from science fiction to what the robot's doing." For one of its customers, the Dutch national police, explaining the technology includes emphasizing that Spot is a very good robot - well-behaved and not so smart after all. "It doesn't think for itself," Marjolein Smit, director of the special operations unit of the Dutch national police, said of the remote-controlled robot. "If you tell it to go to the left, it will go to the left. If you tell it to stop, it will stop." Earlier this year, her police division sent its Spot into the site of a deadly drug lab explosion near the Belgian border to check for dangerous chemicals and other hazards. Perry said the company's acceptable use guidelines prohibit Spot's weaponization or anything that would violate privacy or civil rights laws, which he said puts the Honolulu police in the clear. It's all part of a year-long effort by Boston Dynamics, which for decades relied on military research grants, to make its robots seem friendlier and thus more palatable to local governments and consumer-oriented businesses. By contrast, a lesser-known rival, Philadelphia-based Ghost Robotics, has no qualms about weaponization and supplies its dog-like robots to several branches of the U.S. military and its allies. "It's just plug and play, anything you want," said Ghost Robotics CEO Jiren Parikh, who criticized Boston Dynamics' stated ethical principles as "selective morality" because of the company's past involvement with the military. Parikh added that his company doesn't market its four-legged robots to police departments, though he said it would make sense for police to use them. "It's basically a camera on a mobile device," he said. There are roughly 500 Spot robots now in the wild. Perry said they're commonly used by utility companies to inspect high-voltage zones and other hazardous areas.
Spot is also used to monitor construction sites, mines and factories, equipped with whatever sensor is needed for the job. It's still mostly controlled by humans, though all they have to do is tell it which direction to go and it can intuitively climb stairs or cross over rough terrain. It can also operate autonomously, but only if it's already memorized an assigned route and there aren't too many surprise obstacles. "The first value that most people see in the robot is taking a person out of a hazardous situation," Perry said. Kim, of the ACLU in Hawaii, acknowledged that there might be many legitimate uses for such machines, but said opening the door for police robots that interact with people is probably not a good idea. He pointed to how Dallas police in 2016 stuck explosives on a wheeled robot to kill a sniper, fueling an ongoing debate about "killer robots" in policing and warfighting. "There's the potential for these robots to increase the militarization of police departments and use it in ways that are unacceptable," Kim said. "Maybe it's not something we even want to let law enforcement have." - - AP Technology Writer Matt O'Brien reported from Providence, Rhode Island.
Police departments claim to use robotic dogs as simply another tool to keep emergency responders out of danger, but privacy advocates say the robots are secretly being deployed without safeguards against aggressive, invasive, or dehumanizing uses. The New York Police Department acquired a Spot robotic canine last year from robotics developer Boston Dynamics, but returned it when videos of the robot in the wild sparked a public outcry. Boston Dynamics' Michael Perry said weaponizing Spot or using it to violate privacy or civil rights laws is prohibited, but rival robot-maker Ghost Robotics has no such restrictions. The Hawaii American Civil Liberties Union's Jongwook Kim said, "There's the potential for these robots to increase the militarization of police departments and use it in ways that are unacceptable."
EU Fines Amazon Record $888 Million Over Data Violations
Luxembourg's CNPD data protection authority fined Amazon a record $888 million for breaching the EU's General Data Protection Regulation (GDPR). The EU regulator charged the online retailer with processing personal data in violation of GDPR rules, which Amazon denies. The ruling closes an investigation triggered by a 2018 complaint from French privacy rights group La Quadrature du Net. Amazon says it gathers data to augment the customer experience, and its guidelines restrict what employees can do with it; some lawmakers and regulators allege the company exploits this data to gain an unfair competitive advantage. Amazon also is under EU scrutiny concerning its use of data from sellers on its platform, and whether it unfairly champions its own products.