CrowdTangle opens public application for academics

Naomi Shiffman runs Academic & Research partnerships at CrowdTangle. Brandon Silverman is the CEO of CrowdTangle.

Supporting independent research through data access, training, and resources is critical to understanding the spread of public content across social media and to providing transparency into Facebook’s platforms. That’s why we’re excited to announce that CrowdTangle has now opened a public application for university-based researchers and academics.

CrowdTangle is a public insights tool from Facebook that makes it easy to follow, analyze, and report on what’s happening across social media. CrowdTangle started a pilot program in 2019 to partner with researchers and academics and help them study critical topics such as racial justice, misinformation, and elections. In addition to launching an online application, we’ve built a new hub with information about all Facebook data sets that are available for independent research.

CrowdTangle’s public application for researchers

Over the past year, CrowdTangle has been expanding its beta program for academics and researchers. In that time, CrowdTangle has been used by researchers to study everything from Russian-linked influence operations in Africa to the spread of misinformation in European elections. Over 250 research teams at universities across the globe currently use the tool to support their research, and more than 50 research publications have cited CrowdTangle data in the past year. The Stanford Internet Observatory recently shared how its work using CrowdTangle helped lead to a takedown of 76 accounts across six countries.

Now, we’re making more of this type of work possible by opening applications to university-based researchers at the faculty, PhD, or post-doctoral level who are focused on misinformation, elections, COVID-19, racial justice, and well-being. If accepted, researchers will receive access to all of CrowdTangle’s tools and API, as well as training and resources to support their research.
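
For a sense of what API access looks like in practice, here is a minimal sketch of a query against CrowdTangle’s /posts endpoint, which lists public posts tracked by the dashboards tied to an API token. The token value and date range below are placeholders, and the official API documentation remains the authoritative reference for parameters and response fields:

    import requests

    # Placeholder token; real tokens are issued with an approved CrowdTangle account.
    API_TOKEN = "YOUR_CROWDTANGLE_API_TOKEN"

    # List recent public posts tracked by the dashboards tied to this token.
    response = requests.get(
        "https://api.crowdtangle.com/posts",
        params={
            "token": API_TOKEN,
            "startDate": "2020-07-01",  # illustrative date range
            "endDate": "2020-07-31",
            "sortBy": "date",
            "count": 100,
        },
    )
    response.raise_for_status()

    for post in response.json()["result"]["posts"]:
        print(post["date"], post.get("message", ""))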

Apply for access

“CrowdTangle gave us a much more in-depth understanding of the content we were researching,” says Fabio Giglietto, Associate Professor, University of Urbino. “Thanks to CrowdTangle, we now know a lot more about coordinated inauthentic link sharing strategies, and have even more new hypotheses that we want to test.”

“In our digitally connected, ‘permanently online’ world, understanding the modern world requires a thorough academic examination of digital communication spheres,” says Lena Frischlich, University of Münster. “CrowdTangle offers user-friendly ways for studying public communication relatively fast while, at the same time, providing customization options that allow for theory-driven academic research. Plus, the support team is really great.”

Learn more about CrowdTangle’s work with academics and researchers here.

Researchers can now find all data available to them in one centralized hub

We’ve also published a new hub where independent researchers can access all available Facebook data sets across CrowdTangle, the Ad Library, the Facebook Open Research Tool (FORT), and Data for Good. The hub also includes details on what data are available in each set and how to access them, so researchers can select the right fit for their work quickly and efficiently.

We’re excited to provide these new resources to the research community and help provide more transparency into the spread of public content across Facebook’s platforms.


Announcing the winners of the 2020 AI System Hardware/Software Co-Design request for proposals

In March 2020, Facebook launched the AI Systems Hardware/Software Co-Design request for proposals (RFP) at MLSys. This new research award opportunity is part of our continued goal of strengthening our ties with academics working in the wide range of AI hardware/algorithm co-design research. Today, we’re announcing the recipients of these research awards.
View RFP

We launched this RFP after the success of the 2019 RFP and the AI Systems Faculty Summit. This year, we were particularly interested in proposals related to any of the following areas:

  • Recommendation models
    • Compression, quantization, pruning techniques
    • Graph-based systems with implications on hardware (graph learning)
  • Hardware/software co-design for deep learning
    • Energy-efficient hardware architectures
    • Hardware efficiency–aware neural architecture search
    • Mixed-precision linear algebra and tensor-based frameworks
  • Distributed training
    • Software frameworks for efficient use of programmable hardware
    • Scalable communication-aware and data movement-aware algorithms
    • High-performance and fault-tolerant communication middleware
    • High-performance fabric topology and network transport for distributed training
  • Performance, programmability, and efficiency at data center scale
    • Machine learning–driven data access optimization (e.g., prefetching and caching)
    • Enabling large model deployment through intelligent memory and storage
    • Training un/self/semi-supervised models on large-scale video data sets

“We received 132 proposals from 74 universities, which was an increase from last year’s 88 proposals. It was a difficult task to select a few research awards from a large pool of high-quality proposals,” says Maxim Naumov, a Research Scientist working on AI system co-design at Facebook. “We believe that the winners will help advance the state-of-the-art in ML/DL system design. Thank you to all the researchers who took the time to submit a proposal, and congratulations to the award recipients.”

Research award recipients

Principal investigators are listed first unless otherwise noted.

Algorithm-systems co-optimization for near-data graph learning
Zhiru Zhang (Cornell University)

Analytical models for efficient data orchestration in DL workloads
Tze Meng Low (Carnegie Mellon University)

Efficient DNN training at scale: From algorithms to hardware
Gennady Pekhimenko (University of Toronto)

HW/SW co-design for real-time learning with memory augmented networks
Priyanka Raina, Burak Bartan, Haitong Li, and Mert Pilanci (Stanford University)

HW/SW co-design of next-generation training platforms for DLRMs
Tushar Krishna (Georgia Institute of Technology)

ML-driven hardware-software co-design for data access optimization
Sophia Shao and Seah Kim (University of California, Berkeley)

Rank-adaptive and low-precision tensorized training for DLRM
Zheng Zhang (University of California, Santa Barbara)

Scaling and accelerating distributed training with FlexFlow
Alexander Aiken (Stanford University)

Unsupervised training for large-scale video representation learning
Avideh Zakhor (University of California, Berkeley)


Announcing the winners of the 2020 request for proposals in applied statistics

In February 2020, Facebook launched the Statistics for Improving Insights, Models, and Decisions request for proposals (RFP). This RFP was designed to support research that addresses challenges in applied statistics that have direct applications for producing more effective insights and decisions for data scientists and researchers. Today, we’re announcing the recipients of these research awards.

View RFP

This program is a follow-up to the 2019 Statistics for Improving Insights and Decisions RFP, led by Facebook research teams working in Infra Data Science and Core Data Science. This year, we were particularly interested in the following topics:

  • Learning and evaluation under uncertainty
  • Statistical models of complex social processes
  • Causal inference with observational data
  • Efficient sampling and prevalence measurement
  • Design and analysis of experiments
  • Anomaly detection
  • Interpretability techniques for AI models

For descriptions of each topic, see the RFP application page.

“We are committed to enabling people to build safe and meaningful communities,” says Aude Hofleitner, Core Data Science Research Scientist Manager at Facebook. “This requires us to constantly innovate and push the state of the art of robust scientific methodologies. This commitment becomes all the more important in challenging economic and social times. We are looking forward to continuing to strengthen our engagements with the academic community and support research on these critical problems.”

“Facebook operates one of the largest and most sophisticated infrastructures among tech companies in the world,” says Xin Fu, Director of Research Data Science at Facebook. “We are excited about this opportunity to foster further innovation in research on statistical methodologies that can help improve the efficiency, reliability, and performance of large-scale infrastructure, from the detection of anomalies in services to advanced AI model interpretation techniques.”

We received 154 proposals from more than 107 universities. Thank you to all the researchers who took the time to submit a proposal, and congratulations to the award recipients.

Research award recipients

Adversarially robust temporal embedding models for social media integrity
Srijan Kumar and Duen Horng “Polo” Chau (Georgia Tech Research Corporation)

Learning from comparisons
Stratis Ioannidis, Deniz Erdogmus, and Jennifer Dy (Northeastern University)

Persistent activity mining in continually evolving networks
Danai Koutra (University of Michigan)

Personalized explanation of recommendations via natural language generation
Julian McAuley (University of California, San Diego)

Running experiments with unobservable outcomes: An invariant perspective
Andrea Montanari (Stanford University)

Towards transfer causal learning for average treatment effects
Bin Yu (University of California, Berkeley)


Research shows news articles following suicide prevention best practices get more engagement on Facebook

A new study published today in the journal Proceedings of the National Academy of Sciences examines how news stories about suicide on Facebook adhere to suicide reporting guidelines and how much engagement they receive on the platform. Led by CDC researchers with the support of Facebook researchers, the report is part of the CDC’s work to understand the impact of safe suicide reporting on social media.

Key findings include the following:

  • More than half (60%) of the most-shared news articles about suicide did not include any protective information, such as a suicide prevention helpline or public health resources.
  • The majority of articles included harmful elements that go against suicide reporting guidelines, such as explicitly reporting the name of the person who died (60%), featuring the word “suicide” prominently in the headline (59%), and publicizing details about the location (55%) or method (50%).
  • When news articles followed more of the suicide prevention guidelines, they got more engagement on Facebook. Each additional guideline followed is associated with a 19% increase in the odds of an article being reshared.
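
To make that odds arithmetic concrete, here is a small illustrative sketch. The baseline odds value is hypothetical, and the sketch assumes the 19% effect compounds multiplicatively per guideline, as odds ratios from a logistic model would:

    # The study reports a 19% increase in the odds of a reshare per additional
    # guideline followed, i.e., an odds ratio of 1.19 per guideline. Under a
    # logistic model, following k guidelines multiplies the baseline odds by
    # 1.19**k. The baseline odds below are hypothetical, for illustration only.
    baseline_odds = 0.25

    for k in range(5):
        odds = baseline_odds * 1.19**k
        probability = odds / (1 + odds)
        print(f"{k} guidelines followed: odds={odds:.3f}, P(reshare)={probability:.3f}")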

Leading suicide prevention organizations and health authorities such as the World Health Organization (WHO) and the U.S. Centers for Disease Control and Prevention (CDC) have created guidelines for news organizations to cover suicide more responsibly, such as those presented on reportingonsuicide.org. Their goal is to reduce sensationalism around suicide and to prevent exposure to content that may trigger vulnerable people, such as descriptions of the method used or the location where the death took place. The guidelines also recommend including resources for people in crisis, such as suicide helplines. See the full report here.


Promoting AI ethics research in Latin America and the Caribbean

Norberto Andrade is the Global Policy Lead for Digital and AI Ethics.

It’s becoming more and more difficult to imagine a world without AI. As a general-purpose, ubiquitous technology, AI is becoming further embedded into our daily lives, making and supporting decisions that directly affect all of us. From helping us get through traffic to determining whether we get a loan or the opportunity to interview for a job, AI is everywhere and makes an impact on everyone.

As we increasingly delegate to machines tasks and decisions that used to be performed and made by humans, we are passing on to them the values and norms behind those tasks and decisions. The challenges arising from the technical and moral agency that machines are being called on to exercise have sparked an explosive interest in AI ethics and governance.

Nonetheless, and despite AI being everywhere and affecting everyone, proposals on how to ethically govern this technology are predominantly coming from a handful of countries and regions in the world. In a recent study that looked at the geographic distribution of institutions that released ethical AI guidelines, the overwhelming majority of such proposals came from the United States and the European Union.

When we look at the scientific research being done to advance the field of AI, the picture is very similar. Most research publications and academic-corporate collaborations in the field of AI are from Europe, North America, and the East Asia and Pacific region. Moreover, such research is predominantly technical in nature — in other words, it is generally detached from social science disciplines.

While the impact of AI is global, its debate has been dominated by a very restricted set of actors. And while the effects and implications of AI are diverse, its study and research are increasingly focused on technical aspects.

Introducing GuIA.ai

To promote novel and different perspectives on AI, expanding its debate to other regions and topics, Facebook joined forces with the Centro de Estudios en Tecnología y Sociedad (CETyS) and the Inter-American Development Bank to support independent research on AI ethics, which culminated in the launch of GuIA.ai.

This project, which gathered 19 researchers and academic experts from Latin America and the Caribbean, resulted in the publication of eight articles and a handbook that guides readers through the issues of ethics, regulation, and the public policy environment for the development and adoption of AI. The academic papers — which touch upon topics as diverse as the impact of AI in the public sector; AI fairness and its legal ramifications; AI national strategies in the region; and country-specific perspectives on social justice and human rights, among others — can be found here.

Rather than importing values, concepts, and perspectives from other regions without taking local particularities into account, this project advances the study of AI ethics by grounding it in the context and knowledge of underrepresented regions, namely Latin America and the Caribbean. GuIA.ai tapped into the talent of researchers from the region and provided them with a platform to raise their voices and add their input to this debate.

GuIA.ai brings a fresh and different perspective to the forefront of AI ethics, incorporating and disseminating new AI governance approaches, AI ethical toolkits, and localized takes on issues of fairness and inequality from various regional cultures and traditions. Moreover, this project has paved the way for the launch of similar AI ethics research initiatives from Facebook in India, the Asia Pacific region, and Africa.

It is our hope that this network of researchers continues to expand, enrich, and diversify the study of AI ethics while contributing to comparative projects among researchers from these different regions.


Introducing neural supersampling for real-time rendering

Real-time rendering in virtual reality presents a unique set of challenges — chief among them being the need to support photorealistic effects, achieve higher resolutions, and reach higher refresh rates than ever before. To address this challenge, researchers at Facebook Reality Labs developed DeepFocus, a rendering system we introduced in December 2018 that uses AI to create ultra-realistic visuals in varifocal headsets. This year at the virtual SIGGRAPH Conference, we’re introducing the next chapter of this work, which unlocks a new milestone on our path to create future high-fidelity displays for VR.

Our SIGGRAPH technical paper, entitled “Neural Supersampling for Real-time Rendering,” introduces a machine learning approach that converts low-resolution input images into high-resolution outputs for real-time rendering. This upsampling process uses neural networks trained on the statistics of the scene to restore sharp details while avoiding the computational overhead of rendering those details directly in real-time applications.

Our approach is the first learned supersampling method that achieves significant 16x supersampling of rendered content with high spatial and temporal fidelity, outperforming prior work by a large margin.

Animation comparing the rendered low-resolution color input to the 16x supersampling output produced by the introduced neural supersampling method.

What’s the research about?

To reduce the rendering cost for high-resolution displays, our method works from an input image that has 16 times fewer pixels than the desired output. For example, if the target display has a resolution of 3840×2160, then our network starts with a 960×540 input image rendered by game engines, and upsamples it to the target display resolution as a post-process in real-time.
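
The pixel accounting is simple enough to verify directly. A quick sketch using the example resolutions above (16x fewer pixels means a factor of 4 along each axis):

    # 16x fewer pixels means a 4x reduction along each axis.
    scale = 4

    target_w, target_h = 3840, 2160  # example target display resolution
    input_w, input_h = target_w // scale, target_h // scale

    print(input_w, input_h)                              # -> 960 540
    print((target_w * target_h) // (input_w * input_h))  # -> 16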

While there has been a tremendous amount of research on learned upsampling for photographic images, none of it speaks directly to the unique needs of rendered content, such as images produced by video game engines, because of fundamental differences in how rendered and photographic images are formed. In real-time rendering, each sample is a single point in both space and time, with none of the prefiltering that a camera’s optics and exposure provide. That is why rendered content is typically highly aliased, producing jagged lines and other sampling artifacts visible in the low-resolution input examples in this post. This makes upsampling for rendered content both an antialiasing and an interpolation problem, in contrast to the denoising and deblurring problems that are well studied in existing superresolution research by the computer vision community. The fact that the input images are highly aliased, and that information is completely missing at the pixels to be interpolated, poses significant challenges for producing high-fidelity, temporally coherent reconstructions of rendered content.

Example rendering attributes used as input to the neural supersampling method — color, depth, and dense motion vectors — rendered at a low resolution.

On the other hand, in real-time rendering, we can have more than the color imagery produced by a camera. As we showed in DeepFocus, modern rendering engines also give auxiliary information, such as depth values. We observed that, for neural supersampling, the additional auxiliary information provided by motion vectors proved particularly impactful. The motion vectors define geometric correspondences between pixels in sequential frames. In other words, each motion vector points to a subpixel location where a surface point visible in one frame could have appeared in the previous frame. These values are normally estimated by computer vision methods for photographic images, but such optical flow estimation algorithms are prone to errors. In contrast, the rendering engine can produce dense motion vectors directly, thereby giving a reliable, rich input for neural supersampling applied to rendered content.
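
To illustrate how dense motion vectors are typically consumed, here is a minimal sketch of backward-warping a previous frame into alignment with the current one using standard bilinear resampling in PyTorch. This is a generic building block, not the paper’s actual network design, and the (x, y) ordering and pixel units of the motion vectors are assumptions of the sketch:

    import torch
    import torch.nn.functional as F

    def warp_previous_frame(prev_frame, motion_vectors):
        """Backward-warp prev_frame (N, C, H, W) toward the current frame.

        motion_vectors has shape (N, 2, H, W) and is assumed to hold, for each
        current-frame pixel, the (x, y) offset in pixels to the location where
        that surface point appeared in the previous frame.
        """
        n, _, h, w = prev_frame.shape

        # Base sampling grid: the pixel coordinates of the current frame.
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=torch.float32),
            torch.arange(w, dtype=torch.float32),
        )
        base = torch.stack((xs, ys)).unsqueeze(0).expand(n, -1, -1, -1)

        # Offset each pixel by its motion vector, then normalize coordinates
        # to [-1, 1], the convention grid_sample expects.
        coords = base + motion_vectors
        grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
        grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)

        return F.grid_sample(prev_frame, grid, mode="bilinear", align_corners=True)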

Our method builds on these observations, combining the additional auxiliary information with a novel spatio-temporal neural network design aimed at maximizing image and video quality while delivering real-time performance.

At inference time, our neural network takes as input the rendering attributes (color, depth map, and dense motion vectors per frame) of the current frame and multiple previous frames, all rendered at a low resolution. The output of the network is a high-resolution color image corresponding to the current frame. The network is trained with supervised learning: at training time, each low-resolution input frame is paired with a reference image rendered at high resolution with anti-aliasing methods, which serves as the target image for training optimization.
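
As a schematic of the supervised setup just described, the following sketch pairs low-resolution rendering attributes with a high-resolution anti-aliased target. The architecture, frame count, and L1 loss here are stand-ins chosen for brevity; only the input/output structure follows the description in this post:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    SCALE = 4              # per-axis upsampling factor (16x fewer input pixels)
    FRAMES = 5             # hypothetical: current frame plus four previous frames
    CHANNELS = 3 + 1 + 2   # RGB color + depth + 2D motion vectors, per frame

    # Stand-in network; the paper uses a purpose-built spatio-temporal design.
    model = nn.Sequential(
        nn.Conv2d(FRAMES * CHANNELS, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(64, 3 * SCALE * SCALE, kernel_size=3, padding=1),
        nn.PixelShuffle(SCALE),  # rearranges channels into a 4x-larger RGB image
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def training_step(low_res_attributes, high_res_target):
        """low_res_attributes: (N, FRAMES * CHANNELS, H, W) rendered inputs.
        high_res_target: (N, 3, SCALE * H, SCALE * W) anti-aliased reference."""
        prediction = model(low_res_attributes)
        loss = F.l1_loss(prediction, high_res_target)  # stand-in for the paper's loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()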

Example results. From top to bottom: the rendered low-resolution color input, the 16x supersampling result from the introduced method, and the target high-resolution image rendered offline.

Example results. From left to right: the rendered low-resolution color input, the 16x supersampling result from the introduced method, and the target high-resolution image rendered offline.

What’s next?

Neural rendering has great potential for AR/VR. While the problem is challenging, we would like to encourage more researchers to work on this topic. As AR/VR displays reach toward higher resolutions, faster frame rates, and enhanced photorealism, neural supersampling methods may be key for reproducing sharp details by inferring them from scene data, rather than directly rendering them. This work points toward a future for high-resolution VR that isn’t just about the displays, but also the algorithms required to practically drive them.

Read the full paper: Neural Supersampling for Real-time Rendering, Lei Xiao, Salah Nouri, Matt Chapman, Alexander Fix, Douglas Lanman, Anton Kaplanyan, ACM SIGGRAPH 2020.
