I work in the field of front-end data visualization, and the need to monitor some long-running front-end pages recently came up. In the past, my solution was to record through an existing platform in a browser on my personal PC, or, earlier, through screen-recording tools.
That approach often ran into the following problems.
So, based on the needs above, the new solution has to meet the following requirements.
Using `getDisplayMedia` for recording has several drawbacks:

- `getDisplayMedia` is limited by the browser's security policy: the API is only available when the page is served over https, and recording audio depends on other APIs.
- `getDisplayMedia` leaves little room for optimization when recording multiple pages concurrently, and, most fatally, the performance overhead of recording is borne by the browser. If the page being recorded is itself performance-sensitive, it is basically impossible to record it running properly with this API.
- `node-xvfb` has problems of its own: the virtual desktops it creates seem to share the same stream buffer, so concurrent recordings preempt one another and the recorded video plays back accelerated. A new Node wrapper around xvfb is therefore needed:

```typescript
import * as process from 'child_process';
```
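A minimal sketch of such a wrapper, assuming Xvfb is installed on the host: each recording task gets its own display number so concurrent recordings no longer share one stream buffer. The class name and screen geometry are illustrative assumptions, not the original implementation.

```typescript
import { spawn, ChildProcess } from 'child_process';

// Hypothetical per-task Xvfb wrapper: one dedicated virtual display
// per recording task, so streams cannot preempt each other.
class XvfbSession {
  readonly display: string;
  private proc?: ChildProcess;

  constructor(displayNum: number) {
    this.display = `:${displayNum}`; // e.g. ':99'
  }

  start(): void {
    // Screen geometry (1280x720, 24-bit) is an assumption for illustration.
    this.proc = spawn('Xvfb', [this.display, '-screen', '0', '1280x720x24']);
  }

  stop(): void {
    this.proc?.kill('SIGTERM');
  }
}
```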
Load balancing during concurrent server-side recording. This feature solves the problem of high server CPU load when several recordings are encoded concurrently. To maximize the number of concurrent recordings, I track the number of tasks each server is running or has queued and use that number as the server's weight. When a new recording task is created, the scheduler first checks the current weights and creates the task on the server with the lowest weight; the weight is lowered when a recording completes or the task is manually terminated.
```typescript
import { CronJob } from 'cron';

interface CacheType {
  [key: string]: CronJob;
}

// Tracks the cron job behind each recording task by key, and keeps a
// running count so the server's load (weight) can be read cheaply.
class CronCache {
  private cache: CacheType = {};
  private cacheCount = 0;

  setCache = (key: string, value: CronJob) => {
    this.cache[key] = value;
    this.cacheCount++;
  };

  getCache = (key: string) => this.cache[key];

  deleteCache = (key: string) => {
    if (this.cache[key]) {
      delete this.cache[key];
      // Only decrement the count for keys that actually existed.
      this.cacheCount = this.cacheCount > 0 ? this.cacheCount - 1 : 0;
    }
  };

  getCacheCount = () => this.cacheCount;
  getCacheMap = () => this.cache;
}

export default new CronCache();
```
When starting puppeteer, you need to provide launch parameters:
```typescript
const browser = await puppeteer.launch({
```
The capture API call causes Chrome to pop up an interactive window asking which specific web page to record. Closing this window requires the following parameters to be enabled when starting puppeteer:
```typescript
'--enable-usermedia-screen-capturing',
```
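Putting the pieces together, a sketch of the launch options might look like this. The capture-source title and the display number are placeholders, not values from the original setup; `--auto-select-desktop-capture-source` is the Chromium switch that skips the chooser window by auto-picking the source whose title matches.

```typescript
// Assumed flag set for unattended screen capture.
const launchOptions = {
  headless: false, // capture needs a real (virtual) display to render into
  env: { ...process.env, DISPLAY: ':99' }, // assumes an Xvfb server on :99
  args: [
    '--enable-usermedia-screen-capturing',
    '--allow-http-screen-capture',
    '--auto-select-desktop-capture-source=record-target', // placeholder title
  ],
};

// const browser = await puppeteer.launch(launchOptions);
```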
To execute the recording, you need to inject the recording function into the page via puppeteer's `page.exposeFunction`.
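A sketch of that injection, assuming a puppeteer `Page`; the callback name `onRecordingDone` is hypothetical, and the narrow structural type keeps the sketch independent of puppeteer's own typings.

```typescript
// Only the part of puppeteer's Page this sketch needs.
interface ExposablePage {
  exposeFunction(name: string, fn: (...args: any[]) => unknown): Promise<void>;
}

// Expose a Node-side callback into the page; in-page code can then call
// window.onRecordingDone(...) to signal that a recording finished.
async function injectRecorderHook(page: ExposablePage): Promise<string> {
  const name = 'onRecordingDone'; // hypothetical callback name
  await page.exposeFunction(name, (file: string) => {
    console.log('recording finished:', file);
  });
  return name;
}
```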
Q: Why do I need to introduce xvfb?
A: In the solutions I tried, `getDisplayMedia` requires the runtime environment to provide a desktop. In the current solution, the video stream from xvfb is instead pushed directly into ffmpeg.
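As an illustration, the xvfb-to-ffmpeg push can use ffmpeg's `x11grab` input device. The helper below only builds the argument list; the display number, resolution, and output path are assumptions for illustration.

```typescript
// Build ffmpeg arguments that grab frames from an X11 display
// (such as an Xvfb virtual display) and encode them to H.264.
function buildX11GrabArgs(display: string, size: string, out: string): string[] {
  return [
    '-f', 'x11grab',      // read video from an X server
    '-video_size', size,  // e.g. '1280x720'
    '-framerate', '25',
    '-i', display,        // e.g. ':99' for an Xvfb display
    '-c:v', 'libx264',
    '-preset', 'veryfast',
    out,
  ];
}

// Usage sketch (not executed here):
// spawn('ffmpeg', buildX11GrabArgs(':99', '1280x720', 'out.mp4'));
```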
Q: Why are there certain memory requirements?
A: To provide the minimum memory Chrome needs to run.
https://github.com/sadofriod/time-recorder