Jinja Templating

Templates are an essential ingredient in full-stack web development. With Jinja, you can build rich templates that power the front end of your Python web applications. Jinja is a text templating language. It allows you to process a block of text, insert values from a context dictionary, control how the text flows using conditionals and loops, modify inserted data with filters, and compose different templates together using inheritance and inclusion. In this video course, you’ll learn how to: Install the […]

Read more
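The excerpt above mentions inserting values from a context dictionary, controlling flow with loops, and modifying data with filters. As a minimal sketch of those three features (the template string and variable names here are illustrative, not from the course):

```python
from jinja2 import Template

# A template that inserts a value, applies the built-in `upper`
# filter, and loops over a list from the context dictionary.
template = Template(
    "Hello, {{ name|upper }}!\n"
    "{% for item in items %}- {{ item }}\n{% endfor %}"
)

# The keyword arguments to render() act as the context dictionary.
result = template.render(name="world", items=["a", "b"])
print(result)
```

Rendering produces `Hello, WORLD!` followed by one `- ` bullet per item, showing how the filter and the `for` loop shape the output text.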

Unlocking the future of computing: The Analog Iterative Machine’s lightning-fast approach to optimization 

Picture a world where computing is not limited by the binary confines of zeros and ones, but instead, is free to explore the vast possibilities of continuous value data. Over the past three years a team of Microsoft researchers has been developing a new kind of analog optical computer that uses photons and electrons to process continuous value data, unlike today’s digital computers that use transistors to crunch through binary data. This innovative new machine has the potential to surpass […]

Read more

How to Flatten a List of Lists in Python

Sometimes, when you’re working with data, you may have the data as a list of nested lists. A common operation is to flatten this data into a one-dimensional list in Python. Flattening a list involves converting a multidimensional list, such as a matrix, into a one-dimensional list. To better illustrate what it means to flatten a list, say that you have the following matrix of numeric values: >>> matrix = [ … [9, 3, 8, 3], … [4, 5, […]

Read more
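A common way to flatten one level of nesting, as described above, is a list comprehension with two `for` clauses. The second row of the matrix is truncated in the excerpt, so the values beyond `4, 5` below are illustrative:

```python
matrix = [
    [9, 3, 8, 3],
    [4, 5, 2, 8],  # values after 4, 5 are made up for this example
]

# The outer loop walks the rows; the inner loop yields each value,
# producing a single one-dimensional list.
flattened = [value for row in matrix for value in row]
print(flattened)  # [9, 3, 8, 3, 4, 5, 2, 8]
```

The same result can be obtained with `itertools.chain.from_iterable(matrix)`, which avoids building intermediate lists when the input is large.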

Research Focus: Week of June 19, 2023

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires, and other milestones from across the research community at Microsoft. In this article: NEW RESOURCE Responsible AI Maturity Model As the use of AI continues to surge, new government regulations are expected. But the organizations that build and use AI […]

Read more

DeepSpeed ZeRO++: A leap in speed for LLM and chat model training with 4X less communication

Figure 1: ZeRO++ project highlights. The top-left subfigure shows that ZeRO++ reduces communication volume by 4x compared with ZeRO Stage 3. The top-right subfigure shows ZeRO++ performance on RLHF model training, where ZeRO++ achieves a 1.3x speedup for RLHF training and a 2.x speedup for token generation. Large AI models are transforming the digital world. Generative language models like Turing-NLG, ChatGPT, and GPT-4, powered by large language models (LLMs), are incredibly versatile, capable of performing tasks like summarization, coding, and translation. […]

Read more

Collaborators: Renewable energy storage with Bichlien Nguyen and David Kwabi

Today I’m talking to Dr. Bichlien Nguyen, a Principal Researcher at Microsoft Research, and Dr. David Kwabi, an Assistant Professor of Mechanical Engineering at the University of Michigan. Bichlien and David are collaborating on a fascinating project under the umbrella of the Microsoft Climate Research Initiative that brings organic chemistry and machine learning together to discover new forms of renewable energy storage. Before we unpack the “computational design and characterization of organic electrolytes for flow batteries and carbon capture,” let’s […]

Read more

Python’s Self Type: How to Annotate Methods That Return self

Have you ever found yourself lost in a big repository of Python code, struggling to keep track of the intended types of variables? Without the proper use of type hints and annotations, uncovering variable types can become a tedious and time-consuming task. Perhaps you’re an avid user of type hints but aren’t sure how to annotate methods that return self or other instances of the class itself. That’s the issue that you’ll tackle in this tutorial. First, though, you’ll need […]

Read more
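The tutorial's topic, annotating methods that return `self`, can be sketched with `typing.Self` (added in Python 3.11). The `Queue` class and its `enqueue` method below are illustrative names, not from the tutorial:

```python
try:
    from typing import Self  # Python 3.11+
except ImportError:
    from typing_extensions import Self  # backport for older versions

class Queue:
    def __init__(self) -> None:
        self.items: list[int] = []

    def enqueue(self, item: int) -> Self:
        # Annotating the return type as Self (rather than "Queue")
        # stays accurate in subclasses and supports method chaining.
        self.items.append(item)
        return self

q = Queue().enqueue(1).enqueue(2)
print(q.items)  # [1, 2]
```

With a hard-coded `-> "Queue"` annotation, a subclass's chained calls would be typed as the base class; `Self` is resolved to the actual runtime class by type checkers.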