Google

Corporation update 2022-01-28

Resources

The People + AI Guidebook is a set of methods, best practices, and examples for designing with AI.

Our recommendations are based on data and insights from over a hundred Googlers, industry experts, and academic research.

2023-02-14 08:16 Data Set Tools & Methods

What is Dataset Search?

Dataset Search is a search engine for datasets.

Using a simple keyword search, users can discover datasets hosted in thousands of repositories across the Web.

Our Mission

In addition to making datasets universally accessible and useful, Dataset Search's mission is to:

  • Foster a data sharing ecosystem that will encourage data publishers to follow best practices for data storage and publication
  • Give scientists a way to show the impact of their work through citation of datasets that they have produced

As more dataset repositories use schema.org and similar standards to describe their datasets, the variety and coverage of datasets that users find in Dataset Search will continue to grow.
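Repositories make their datasets discoverable by embedding schema.org `Dataset` markup (typically as JSON-LD) in their pages, which Dataset Search then crawls. A minimal illustrative sketch in Python; the dataset names, URLs, and values here are hypothetical, not from any real repository:

```python
import json

# Hypothetical schema.org Dataset record, serialized as JSON-LD.
# All names and URLs below are illustrative placeholders.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example Rainfall Measurements",
    "description": "Daily rainfall totals for an example weather station.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {"@type": "Organization", "name": "Example Research Group"},
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/rainfall.csv",
    },
}

# A publisher would embed this inside a <script type="application/ld+json"> tag.
print(json.dumps(dataset, indent=2))
```

Publishing metadata in this shared vocabulary is what lets a single keyword search cover thousands of otherwise unrelated repositories.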

2022-11-17 16:03 Content Research & Reports
We live in a world of great natural beauty — of majestic mountains, dramatic seascapes, and serene forests. Imagine seeing this beauty as a bird does, flying past richly detailed, three-dimensional landscapes. Can computers learn to synthesize this kind of visual experience? Such a capability would allow for new kinds of content for games and virtual reality experiences: for instance, relaxing within an immersive flythrough of an infinite nature scene. But existing methods that synthesize new views from images tend to allow for only limited camera motion.

In a research effort we call Infinite Nature, we show that computers can learn to generate such rich 3D experiences simply by viewing nature videos and photographs. Our latest work on this theme, InfiniteNature-Zero (presented at ECCV 2022), can produce high-resolution, high-quality flythroughs starting from a single seed image, using a system trained only on still photographs, a capability not previously demonstrated. We call the underlying research problem perpetual view generation: given a single input view of a scene, how can we synthesize a photorealistic set of output views corresponding to an arbitrarily long, user-controlled 3D path through that scene? Perpetual view generation is very challenging because the system must generate new content on the other side of large landmarks (e.g., mountains), and render that new content with high realism and in high resolution.

2022-05-25 06:59 Weblink Research & Reports
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
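The FID (Fréchet Inception Distance) score cited above compares the Gaussian statistics (mean and covariance) of deep-network features extracted from real versus generated images; lower is better. A minimal NumPy sketch of the underlying Fréchet distance between two Gaussians (the full metric additionally runs images through an Inception network to obtain the feature statistics, which is omitted here):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    # For PSD covariances, the eigenvalues of sigma1 @ sigma2 are real and
    # non-negative, so Tr(sqrtm(sigma1 @ sigma2)) is the sum of their roots.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    trace_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * trace_sqrt)
```

Identical feature statistics give a distance of zero; the 7.27 reported for Imagen measures how close its generated-image statistics come to those of real COCO images.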
2022-05-06 13:47 Weblink Articles & Editorial
Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.
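"Few-shot" evaluation means the model is shown only a handful of worked examples in its prompt, with no gradient updates, before being asked a new question. A minimal sketch of how such a prompt is assembled; the Q/A format and arithmetic examples are illustrative, not drawn from the PaLM evaluation suite:

```python
def build_few_shot_prompt(examples, query):
    """Concatenate worked (question, answer) pairs ahead of the new query,
    leaving the final answer blank for the model to complete."""
    lines = [f"Q: {question}\nA: {answer}" for question, answer in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("What is 2 + 2?", "4"), ("What is 3 + 5?", "8")],
    "What is 7 + 6?",
)
```

The model's continuation of the trailing "A:" is scored against the reference answer; state-of-the-art few-shot performance means PaLM answers correctly from such minimal context more often than prior models.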

Our Specialities

Information Technology
Infrastructure, Platform / Product

Child Organizations

DeepMind Corporation
Google Digitalakademin Corporation

Network

Organizations
Google Partner
Users
Daniel Lowgren Partner
Conny Svensson Customer / Client
Prakash Chandra Chhipa Fan
Lars Hermansson Customer / Client