Let's transform your signal acquisition hardware into a cloud-managed edge lambda service

Providing extensible event handling for latency-sensitive hardware signals can be challenging. Our Rust-based software agent embeds a Lambda service that runs low-latency Lua and WASM lambdas, enabling intelligent inference right at the source of the signal.

Here’s How We Can Help

Our Rust-based edge software agent supports either cloud-managed over-the-air deployment or a factory-set sealed configuration bundle.

Kick the tires by quickly developing lambdas in Lua that directly respond to hardware signals, right from our cloud console.

Then develop beefier lambdas in Rust or Go that compile to WASM, for more complex workloads such as inference powered by ML or AI.
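As a sketch only — the function name and event shape below are hypothetical illustrations, not the On Prem SDK — a Rust lambda of this kind might scan a batch of A/D samples and emit an event when a threshold is crossed, with the whole thing compiled to WASM for deployment to the agent:

```rust
// Hypothetical sketch of an edge lambda written in Rust. In a real
// deployment this would be compiled to a WASM target and the host
// agent would pass sample batches in and act on the returned event.

/// An event emitted when a signal sample crosses a threshold.
#[derive(Debug, PartialEq)]
pub struct ThresholdEvent {
    /// Index of the offending sample within the batch.
    pub index: usize,
    /// The sample value that crossed the threshold.
    pub value: f64,
}

/// Scan a batch of A/D samples and report the first threshold crossing.
pub fn handle_samples(samples: &[f64], threshold: f64) -> Option<ThresholdEvent> {
    for (index, &value) in samples.iter().enumerate() {
        if value.abs() > threshold {
            return Some(ThresholdEvent { index, value });
        }
    }
    None
}

fn main() {
    let batch = [0.01, 0.02, 0.75, 0.03];
    if let Some(event) = handle_samples(&batch, 0.5) {
        println!("threshold crossed at sample {}: {}", event.index, event.value);
    }
}
```

Because the logic is plain Rust, the same function can be unit-tested natively on a laptop before it is ever compiled to WASM and pushed over the air.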

Cloud Console

Develop lambdas in our collaborative web console

Lambdas

Run logic that responds to events, right at the source of the signal

Lambda Triggers

Write logic that generates the events that Lambdas react to

Rust Ecosystem

Leverage inference crates for ONNX, TensorFlow, and more

High Performance

Our agent is Rust-based to help your intelligence keep up with your signals

How We Help You Get Started

STEP ONE

Inquiry Form

Click Get Started to fill out an inquiry form describing your goals.

STEP TWO

Schedule a Call

See a demo, and explore options ranging from self-service to paid pilot.

STEP THREE

Install the Agent

Start building your first Lambda that deploys to your on-premise agent.

Edge Intelligence Enabler

Hi, I’m David, CEO of On Prem

As a seasoned software professional with decades of start-up and Fortune 500 experience, I know analytics.

In the cloud, analytics is performed on big servers using distributed clusters of Apache Spark or DataFusion. I worked on several of those, including the Spark Service at IBM Analytics, and the GPU-accelerated Theseus SQL processor at Voltron Data.

Intelligence in the cloud is fed STALE DATA that has to be brought in from the edge, sometimes at great expense.

Fast forward to today. Inference is moving closer to the source of the signal, where insights can inform control systems without requiring a slow, expensive, or unreliable connection to the cloud.

On Prem focuses on Lua and WASM, two runtimes proven to deploy reliably over the air or embed into firmware, and uniquely able to orchestrate performance-sensitive code.

Frequently Asked Questions (FAQs)

Some of our most frequently asked questions:

Q. Why is your agent written in Rust, not Go?

I know of a Go-based software agent that tried to keep up with a 4-channel 24-bit A/D converter, and topped out at around 22,000 samples/sec (almost 20x too slow). That's when I learned that low-latency signal processing is at odds with garbage-collected languages.

Q. Why is your agent written in Rust, not C++?

Rust provides the agility required to execute projects swiftly, while also keeping the door open to targeting WASM or no_std embedded environments.

Q. How can Release Versioning be achieved with your SaaS?

Lambdas can be developed interactively via the cloud console, and targeted to run on a software agent on your laptop, or on specialty equipment in your test lab designated as a development environment.

Assets can then be exported as script files plus JSON or YAML metadata, and checked into Git.
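For illustration, an exported asset might pair a script file with a small metadata file like the following — the layout and field names here are hypothetical, not On Prem's actual export format:

```yaml
# spike-detector.lambda.yaml (hypothetical export metadata)
name: spike-detector
runtime: lua            # or: wasm
version: 1.4.0
trigger: adc-channel-3  # the Lambda Trigger this lambda reacts to
source: spike-detector.lua
```

Because the export is plain text, ordinary Git workflows — diffs, code review, tags for releases — apply to lambdas the same way they apply to application code.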

Continuous Integration tests can rebuild your environment from scratch for each merge request, using our CLI to import your assets.

Q. What if I want to embed lambdas in my specialty hardware?

No problem. An agent can be configured during a "factory burn-in" process, and then set to disable "phoning home" to the cloud, so it never downloads new sealed configuration bundles for reconfiguration after it ships.

Q. What are some of your pricing options?

Getting started is free if you're in the mood for self-service. Just download the agent, log into the web console, and get started.

If you're planning to operationalize Python code from a data scientist's notebook (necessitating its migration to Rust), or to offer self-service aPaaS capabilities to the customers of your specialty hardware, then you might consider a paid pilot, a source-code license of the On Prem software agent, or a white-labelled license of the On Prem Console.

Q. I'm new to WASM. What's its basic value proposition for inference?

The Rust ecosystem at crates.io contains a rich assortment of high-performance, memory-safe libraries, ranging from ONNX and TensorFlow runtimes to image, video, and audio processing. Rust compiles to WASM, and On Prem can then deploy it over the air to your edge software agent, or package it as firmware.

Let's get started adding cloud-managed self-service extensibility to your specialty hardware

Click the button below to get started

We add extensibility to your specialty hardware with a cloud-managed edge lambda service.

© Copyright 2025 All Rights Reserved