
Media Tools

Process media streams in the browser.

Blur

BlurFilterV1 uses the TensorFlow BodyPix model; its documentation can be found on GitHub.

BlurFilterV2 uses the (Apache-licensed) Google Meet models.

Build and publish

  1. npm install
  2. npm run build
  3. npm publish

Demo

The demo folder contains a demo project, suitable for finding optimal model settings on various devices. To set it up:

  1. npm install
  2. npm run serve
  3. Open browser at localhost

The blur amount and segmentation settings forms are applied on each frame, while changing the model config downloads a new model for the chosen values. Model sizes range from approximately 1 to 14 MiB.

Usage Blur Filter

Import the BodyFilter and create a new instance. It expects a video element carrying the input, which should be hidden in the DOM, and a canvas element where the output is rendered. If the canvas is not in the DOM, Firefox will raise an error.

Then await bf.load(modelConfig?), which accepts a model config; the default is low-quality but fast. The values of the public objects bf.segConfig and bf.blurAmount can be set directly.

A MediaStream can be extracted with bf.getMediaStream(), or canvas.captureStream() can be used directly where needed. An fps counter is also supported; set the onfps function to receive such events every second.

It is also important to call bf.unload() when the blur filter is no longer used, so as to release memory.
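
Putting this together, a minimal sketch is shown below; the constructor signature, the element variable names, and the shape of the settings objects are assumptions based on the description above:

import { BodyFilter } from "@auvious/media-tools";

// assumption: the constructor takes the hidden input video and the output canvas
const bf = new BodyFilter(videoElement, canvasElement);

// receive fps updates every second
bf.onfps = (fps) => console.log("fps:", fps);

// load with the default (low-quality but fast) model config
await bf.load();

// public settings, applied on each frame; their exact shapes are assumptions here
bf.blurAmount = 5;

const mediastream = bf.getMediaStream();

// release memory when the filter is no longer needed
bf.unload();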

Usage Background Filter

Almost the same as the v1 filter, with a few small differences.

import { BackgroundFilter } from "@auvious/media-tools";

const instance = new BackgroundFilter();

instance.onfps = (f) => (this.fps = f);
instance.onfail = (error) => console.error(error.type, error._error);

const hasLoaded = await instance.load({ assetsPath: location.origin + "/assets" });

const mediastream = instance.play();

// may trigger onfail
await instance.setImage(imageRef);

instance.unsetImage(); // back to blur

instance.unload();

.load takes as its argument an object with these properties:

  • assetsPath, the absolute URL of the models and wasm files
  • lite?, whether to load small or full model (default: full)
  • image?, HTMLImageElement, to use instead of blur
  • useWebgl?

You can also check whether the model is supported with instance.isSupported().
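
For example, a hedged sketch combining the support check with the load options above (the option values are illustrative):

if (instance.isSupported()) {
  await instance.load({
    assetsPath: location.origin + "/assets",
    lite: true, // prefer the small model
    useWebgl: true,
  });
}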

Event listeners

  • fps(number), fps updates
  • fail, called when
    • any setup or render step fails, that is, blur is unsupported
    • the image set for the background cannot be used
  • output, the MediaStream output
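
The fps and fail listeners appear as onfps and onfail in the example above; assuming the output listener follows the same naming pattern, it can be wired to a sink element:

// assumption: 'output' is exposed as onoutput, mirroring onfps/onfail
instance.onoutput = (mediastream) => {
  videoElement.srcObject = mediastream;
};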

Object Tracking

Track multiple points in a video stream.

import { Tracker } from "@auvious/media-tools";

const tracker = new Tracker(videoElement);

tracker.onupdate = (id, x, y) => {
  // x is the horizontal (left) offset, y the vertical (top) offset
  pointers[id].css.top = y + "px";
  pointers[id].css.left = x + "px";
};

// track at position (X 100, Y 200) a box of radius 30 (circle fits in box)
const id = tracker.track(100, 200, 30);

tracker.untrack(id);

tracker.destroy();

MediaDevices

With the MediaDevices singleton you can:

  • receive added/updated/removed MediaDeviceInfo events, depending on browser support
  • receive permission updates, if the browser supports it
  • set base constraints to be merged with getUserMedia
  • access devices grouped by ids, kinds and groups, via getters
  • save preferred devices

import { MediaDevices } from "@auvious/media-tools";

// set event handlers first
MediaDevices.events.subscribe({
  // added/updated/removed let you know if you can use the device to request media from
  added(devices) {
    console.log(...devices);
  },
  updated(devices) {
    // rarely, some devices change label
  },
  removed(devices) {
    // devices the user has removed; may be triggered with a delay due to browser implementation
  },
  permissions(perms) {
    // emitted every time permissions change; starts with all permissions false by default
    console.log(perms.video);
  },
});

// first-time setup; only takes effect the first time it is called, otherwise ignored
await MediaDevices.setup();

// now you know
console.log(MediaDevices.has.audioinput);

// save preferred devices to local storage, use undefined to reset
MediaDevices.savePreferred({ audioinput: "13ba34...", audiooutput: undefined });

// speaker control, ignored if unsupported
MediaDevices.setSpeaker(speakerId);

// the speaker id which all sources have been set with
console.log(MediaDevices.activeSpeaker);

// autosync to activeSpeaker
MediaDevices.syncSpeaker(videoElm);

// remove autosync
MediaDevices.desyncSpeaker(videoElm);

// get some device
MediaDevices.getByLabel("Label");
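
As a sketch of the base-constraints feature, and assuming MediaDevices.constraints is a plain MediaStreamConstraints object (as its use in the MediaPipe example below suggests), it can be passed straight to getUserMedia:

const stream = await navigator.mediaDevices.getUserMedia(MediaDevices.constraints);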

MediaPipe

Consists of three parts: an input stream from getUserMedia, an optional series of MediaEffects, and an optional HTML element sink. Each part can be altered independently, and where possible changes take effect immediately.

import { MediaPipe } from "@auvious/media-tools";

const pipe = new MediaPipe();

// emitted when a new media stream or its sub-tracks are set or changed on an audio or video sink
pipe.events.on("output", console.log);

// getUserMedia wrapper, which also makes sure preferred devices are selected if they exist
// can be called multiple times if needed
// constraints are merged from left to right, so the first one acts as the base
// in this case no audio will be requested even if audio is defined in the second constraint
pipe.input.setInput([{ video: true }, MediaDevices.constraints], true);

// get the device ids actually selected, if overridden
console.log(pipe.activeDevices.audioinput);

// this will emit an 'output' event if an input stream exists
pipe.sink.setSink(videoElement);
pipe.sink.unsetSink();

// will connect and apply the filter, but you still have to call 'play' on it yourself
// may also emit 'output' event if sink is set
pipe.addEffect(backgroundFilter);

// can be removed, and later reinserted if needed
pipe.removeEffect(backgroundFilter);

// at the end reset pipe
pipe.reset();