Computing

Paying Ohmage

By Bryon Moyer | August 21, 2020 | 0 Comments

[From the last episode: We looked at some ways of optimizing neural networks so that they run better at the edge.] We’re going to cover one more interesting development in the world of AI, but in order for it to make any sense, we’re going to have to start by covering […]

Making Models Smaller

By Bryon Moyer | August 14, 2020 | 0 Comments

[From the last episode: We looked at what it means to do machine learning “at the edge,” and some of the compromises that must be made.] When doing ML at the edge, we want two things: less computing (for speed and, especially, for energy) and smaller hardware that requires less […]
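
The excerpt above is truncated, but the core idea behind making a model smaller by pruning can be sketched in a few lines. This is only a minimal illustration using NumPy; the random weights and the 0.5 threshold are assumptions for demonstration, not details from the article.

```python
import numpy as np

# Illustrative weight matrix for one layer (random values as a stand-in).
weights = np.random.randn(4, 4).astype(np.float32)

# Magnitude-based pruning: zero out any weight whose absolute value falls
# below a chosen threshold. The 0.5 threshold is an assumed example value.
threshold = 0.5
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# The zeroed weights represent multiplies (and storage) that can be skipped.
print("fraction of weights pruned:", np.mean(pruned == 0.0))
```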

Inference at the Edge

By Bryon Moyer | August 7, 2020 | 0 Comments

[From the last episode: We looked at activation functions and what they’re for.] We’ve talked about the structure of machine-learning (ML) models and much of the hardware and math needed to do ML work. But there are some practical considerations that mean we may not directly use the pristine model as […]

Activation Functions

By Bryon Moyer | July 31, 2020 | 0 Comments

[From the last episode: We looked at the notion of sparsity and how it helps with the math.] We saw before that there are three main elements in a CNN: the convolution, the pooling, and the activation. Today we focus on activation. I’ll start by saying that the […]
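
To make those three elements concrete, here is a minimal NumPy sketch of the convolution, pooling, and activation sequence. The 1-D input, the 3-tap kernel, the pooling window, and the choice of ReLU are illustrative assumptions, not details taken from the article.

```python
import numpy as np

# A small 1-D input signal and a 3-tap convolution kernel (illustrative values).
x = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 2.0])
kernel = np.array([0.25, 0.5, 0.25])

# 1) Convolution: slide the kernel across the input (multiply-accumulate).
conv = np.convolve(x, kernel, mode="valid")

# 2) Pooling: take the max over non-overlapping windows of 2 to shrink the output.
pooled = conv[: len(conv) // 2 * 2].reshape(-1, 2).max(axis=1)

# 3) Activation: apply a nonlinearity (ReLU here) element-wise.
activated = np.maximum(pooled, 0.0)

print(conv, pooled, activated)
```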

The Benefits of Sparsity

By Bryon Moyer | July 24, 2020 | 0 Comments

[From the last episode: We looked at the reason for doing all the multiply-accumulate math with machine learning.] Before we leave this matrix multiplication thing, there’s one other notion floating about a lot these days. Anyone listening to conversations about how to make machine-learning (ML) algorithms faster will eventually hear […]
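
As a rough illustration of why sparsity helps with the multiply-accumulate math, the sketch below compares a dense dot product with one that skips the zero weights. The vector length and the 90% zero fraction are assumed for demonstration only.

```python
import numpy as np

# A weight vector where most entries are zero (90% zeros, chosen only for
# illustration) and a dense activation vector.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1000)
weights[rng.random(1000) < 0.9] = 0.0
activations = rng.standard_normal(1000)

# Dense multiply-accumulate: every element is multiplied, zeros included.
dense_result = np.dot(weights, activations)

# Sparse multiply-accumulate: skip the zero weights entirely, so only the
# nonzero entries (roughly 10% here) do any work.
nonzero = np.nonzero(weights)[0]
sparse_result = np.dot(weights[nonzero], activations[nonzero])

print(dense_result, sparse_result, len(nonzero))
```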
