High-performance computing (HPC) is becoming ever more important in the age of machine learning and artificial intelligence, which depend on an enormous amount of number crunching. Training of ML models has moved from CPUs to GPUs (graphics processing units with thousands of computational cores) and, more recently, to ASICs (application-specific integrated circuits) that perform simple computations at speeds several orders of magnitude faster than CPUs. These chips must be carefully arranged to minimize data-transmission bottlenecks and to build large compute clusters that can handle the most demanding learning tasks. Popular streaming services likewise rely on ever more cloud computing resources for video transcoding and delivery.
Dr. Jonyer built the largest compute cluster at Oklahoma State University for his machine learning research program, using the latest technologies. He also consulted on product management for the high-performance computing infrastructure provider Rescale, where he mapped the HPC landscape and set product direction based on customer feedback.
High-performance computing has changed character over the years, starting with many-CPU vector computers, then shifting to a preference for cluster computing. With the advent of GPUs we now have small vector computers inside our servers, and these in turn are being displaced by FPGAs and ASICs as certain applications, such as machine learning, graphics processing, and crypto mining, have become so prominent that specialized hardware is economical.
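The vector-processing idea mentioned above can be illustrated with a minimal sketch: the same multiply-add is applied across many elements at once, which is exactly the pattern GPU and vector hardware accelerate. Here NumPy's vectorized `dot` stands in for what such hardware does, compared against a scalar, one-element-at-a-time loop; the example and its array sizes are purely illustrative, not drawn from any particular engagement.

```python
import numpy as np

def dot_scalar(a, b):
    """Compute a dot product one element at a time, as a scalar CPU core would."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

# Two illustrative input vectors.
a = np.arange(1_000, dtype=np.float64)   # 0, 1, ..., 999
b = np.ones(1_000, dtype=np.float64)

# Both forms give the same answer; the vectorized call maps the whole
# multiply-add pattern onto parallel (SIMD/GPU-style) execution units.
scalar_result = dot_scalar(a, b)
vector_result = float(np.dot(a, b))
assert scalar_result == vector_result
```

The workloads driving specialized ML hardware are dominated by exactly this kind of dense multiply-accumulate arithmetic, which is why casting it in vector form pays off.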
With new innovations come claims of patent infringement and trade secret misappropriation. We help both plaintiffs and defendants with claim construction, code inspection, drafting declarations, and testifying at deposition and trial.
Accelerated Machine Learning Inference
An early employee of a startup joined an established technology company and was accused of misappropriating a trade secret: a high-performance machine learning algorithm. We were retained to investigate whether the technology might be used by the larger company and, if so, in what types of use cases and to what effect. We evaluated the state of the art in machine learning training and inference in the context of that company's applications and use cases, and reviewed and interpreted many of its publications to arrive at an actionable opinion.