Notes on some native-related issues encountered while writing an Electron app
A record of an Electron + DLL/SO (node-ffi) + SQLite3 project practice

Time: 2019-9-14

Background
This is a full-stack project with a Node backend. The project needs to ship two versions, a B-side version and a C-side version:
- The B-side must support multi-instance environments.
- The C-side must run cross-platform as an offline, standalone application.
Some of the complex data-processing functions in the project are compiled from C into dynamic link libraries (DLLs; on Linux these are called shared libraries, abbreviated SO, and below "DLL" is used as the general term), and a Python wrapper exposes their interface as an HTTP service. In the C-side version, to meet the offline standalone requirement, the Python service is packaged as an executable program that Node invokes via the command line.
Based on the above requirements, we made the following technology selections.

Technology selection
Node version
We initially chose the then-current LTS version, 10.15.3, but switched to 10.11.0 to stay consistent with the Node version built into Electron.
Although 10.15.3 and 10.11.0 are close, they do differ in some features. For example, fs.mkdir supports the recursive option in 10.15.3 but not in 10.11.0, which produced unexpected results when migrating to Electron. So if you are developing both the B-side and the C-side, it is best to pin the exact Node version from the beginning to ease migration.

Database
Because of the C-side's offline standalone requirement, we chose SQLite as the database.
SQLite is a single-file database, and the NPM package sqlite3 is a wrapper around the SQLite3 engine. A single database file plus an NPM package containing the database engine is what makes it possible to package the C-side as an offline, standalone program.
DLL loading and calling
For loading and calling DLLs in Node, the following two NPM packages are used:

- ffi: lets Node load and call DLLs
- ref: provides powerful memory and pointer operations
C-side application build and packaging
Electron, developed by GitHub, is an open-source framework for building cross-platform desktop applications with HTML, CSS, and JavaScript. It achieves this by combining Chromium and Node.js into a single runtime and packaging the result as an application for macOS, Windows, and Linux.
Electron's friendliness to front-end developers and its cross-platform support led us to package the B-side version with Electron, reusing the B-side code as much as possible. The main dependencies:

- electron@4.2.0: the 4.x line of Electron bundles Node 10.x, matching the Node version used on the B-side
- electron-builder: for packaging the Electron app
Development environment setup and troubleshooting
Determining the development environment from the DLL's bitness
The DLL's bitness has a major impact on the development environment. In this project, one DLL was only provided as a 32-bit build on the Windows platform, which forced us to choose 32-bit Node for development. Fortunately, a 32-bit operating system is not required.
We recommend nvm here for managing Node versions. Note that when you switch the Node version, native modules such as ffi, ref, sqlite3, and node-sass need to be reinstalled and recompiled with node-gyp or a similar tool.
Reference: native modules on Node.js
When installing native modules such as ref, ffi, and sqlite3, node-gyp is needed to compile them for the current platform. Therefore node-gyp's installation prerequisites must be met.
Reference: github: node-gyp# Installation
The following are the practical steps in this project.

Linux

- Install python v2.7
- Install make
- Install gcc and gcc-c++; the gcc version needs to match the gcc version the DLL depends on, and the gcc-c++ version needs to match the gcc version.
Windows
- npm install --global --production windows-build-tools: windows-build-tools helps us easily set up the environment needed to compile Node native modules. In this project the --vs2015 parameter was also used; VS2017 is installed by default, we hit some problems at first, and the process went more smoothly after switching to VS2015. Choose according to your situation.
- npm config set msvs_version 2015: set the value to 2017 if VS2017 is installed.
- npm config set python <python 2.7 installation path>
Electron Development Environment
The Electron development environment is consistent with point 2, with one difference: the Node we use is Electron's built-in Node, not the system-installed Node. Therefore, Node native modules need to be recompiled for the Electron environment.
This step can be completed with either of two NPM packages, electron-rebuild or electron-builder:

- electron-rebuild: see the Electron documentation, "Using Native Node Modules"
- electron-builder: see the electron-builder "Quick Setup Guide"
This project uses the second method:
Install electron-builder as a dependency, then add "postinstall": "electron-builder install-app-deps" to the NPM scripts; this automatically compiles native modules for us each time dependencies are installed.
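For reference, the relevant pieces of package.json would look roughly like this (the electron-builder version shown is illustrative, not from the project):

```json
{
  "scripts": {
    "postinstall": "electron-builder install-app-deps"
  },
  "devDependencies": {
    "electron": "4.2.0",
    "electron-builder": "^20.0.0"
  }
}
```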
Extended reading on Electron version selection:
- Electron packaging a Node program: problems caused by inconsistent NODE_MODULE_VERSION values (1)
- Electron packaging a Node program: how to get the ABI of Electron and Node, and mapping an ABI to a version (2)
- Node NODE_MODULE_VERSION (ABI) version table
Installing dependencies (some of these problems may also occur when packaging with Electron)
npm install
- Install dependencies with npm rather than cnpm.
- On Linux, if the current user is root, you need to add the --unsafe-perm parameter; otherwise you will hit EACCES: permission denied and similar insufficient-privilege errors.
- If you encounter a V140-related error, install the V140 toolset; one way is to install it through Visual Studio.
If MSBuild reports an error mentioning Microsoft.Cpp.Default.props and the path it shows is incorrect,
you can try setting an environment variable:
```
set VCTargetsPath=C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140
```
This path is usually the same, but it may differ; set it according to your actual installation.
References: Stack Overflow: "Why does MSBuild look in C:\ for Microsoft.Cpp.Default.props instead of C:\Program Files (x86)\MSBuild? (error MSB4019)"; "rcedit.exe failed – unable to commit changes"
Using the DLL
Compared with the environment configuration, the DLL itself is easier to use. For basic use of ffi combined with ref, refer to the official ffi example and the official ref example.

Common problems
DLL depends on other DLLs
This problem is quite common. We have tried two solutions.
Use ffi.DynamicLibrary to load the dependencies
```javascript
let { RTLD_NOW, RTLD_GLOBAL } = ffi.DynamicLibrary.FLAGS;
// For multiple dependencies, call ffi.DynamicLibrary once per library
ffi.DynamicLibrary('/path/to/.dll/or/.so', RTLD_NOW | RTLD_GLOBAL);
```
Alternatively, make the dependencies globally accessible, as detailed below.

Encountering Windows error code 126
This means a DLL cannot be found, either because the path is wrong or because its dependencies were not loaded and cannot be reached by the program on the global path. It is easy to hit this problem when the development and deployment environments differ.
So how can we determine which dependencies are missing? Linux provides ldd to list the shared libraries a program needs to run; any dependency that cannot be resolved shows its address as "not found". The CMD that ships with Windows does not support this command, but tools like Git Bash integrate it, and on Windows you can also use Dependency Walker and similar tools to inspect dependencies.

Merging multiple libraries into one object
The third parameter of ffi.Library accepts an object. If an object is passed in, ffi adds the library's methods to that object (methods with the same name are overwritten) and returns it; if the parameter is omitted, a new object is returned. So merging is possible, but this requirement is rare in practice.
Reading complex data structures with ffi
As a simple example, reading an array of strings:
```javascript
function readStringArray (buffer, total) {
  let arr = [];
  for (let i = 0; i < total; i++) {
    arr.push(ref.get(buffer, ref.sizeof.pointer * i, ref.types.CString));
  }
  return arr;
}
```
Reference: Complex data structures with node-ffi
Making the DLL or its dependencies globally accessible

Windows environment
Shared directory approach
|            | Win 32              | Win 64              |
| ---------- | ------------------- | ------------------- |
| 32-bit DLL | C:/Windows/System32 | C:/Windows/SysWOW64 |
| 64-bit DLL |                     | C:/Windows/System32 |
Depending on the DLL's bitness and the operating system's bitness, just put the DLL into the corresponding directory above.
The PATH environment variable
As is well known, the PATH environment variable on Windows is powerful, and DLL lookup is no exception. By adding the absolute path of the directory containing the DLL to the PATH environment variable, the DLL becomes globally accessible. Inside Node you can set it dynamically like this:
```javascript
process.env.PATH += `${path.delimiter}${xxx}`;
```
Linux environment
This is mainly done with the ldconfig command. ldconfig is the SO management command that makes SOs shareable system-wide: it searches for SOs according to certain rules and creates the links and cache for programs to use.
The following is the search scope:
- the /lib directory
- the /usr/lib directory
- the directories declared in the /etc/ld.so.conf file
Usually, the /etc/ld.so.conf file contains a line like the following:
include ld.so.conf.d/*.conf
Therefore, the directories declared in the .conf files under /etc/ld.so.conf.d are also searched, as are the directories set in the LD_LIBRARY_PATH environment variable.
So, put the SO in one of the directories mentioned above and run ldconfig, and the SO becomes shared. We can modify the /etc/ld.so.conf file, add an /etc/ld.so.conf.d/*.conf file, or modify the LD_LIBRARY_PATH environment variable.
Reference resources:
- Linux Programmer's Manual – LDCONFIG(8)
- [Linux notes] ldconfig and ldd
- The difference between lib, lib32, lib64, libx32, and libexec
A practical use of a Linux boot script together with ldconfig
The file-parsing part of the project depends on many DLLs, some of which affect the graphical interface, causing users to see a black screen the next time they log in.
For the file-parsing DLL dependencies, Node writes a file under /etc/ld.so.conf.d declaring the path where the dependencies live, and Linux loads them automatically at boot. To solve the black-screen problem, the dependency declaration must be removed automatically at boot, so that users are not locked out of the system desktop.
So we need a boot script to do some processing for us at startup. An example follows:
Create a new file delete-ldconfig.sh with the following contents:
```sh
#!/bin/bash
# chkconfig: 5 90 10
# description: test

rm -f /etc/ld.so.conf.d/test.conf
ldconfig
```
Execute the following commands:
```sh
cp ./delete-ldconfig.sh /etc/rc.d/init.d
cd /etc/rc.d/init.d/
chmod +x delete-ldconfig.sh
chkconfig --add delete-ldconfig.sh
chkconfig delete-ldconfig.sh on
```
Reference resources:
- [CentOS 7] Adding a service/script that starts at boot
- The meaning of the seven run levels of the CentOS system
A busy Electron main process blocks the renderer process
In C-side file-parsing tests, we used a node_modules directory packed into an archive: not very large, but containing many small files. Calling the DLL to parse it took about 30 minutes, and the subsequent processing in our program also took a long time. During that time, the C-side application interface stopped responding.
After investigation: page rendering in Chromium requires ongoing synchronous IPC between the UI process and the main process, so if the main process is busy, the UI process blocks on that IPC.
Therefore, if you don't want the renderer process to be blocked, you need to lighten the main process's load, for example:
- Insert asynchronous breaks into long stretches of synchronous code to temporarily yield execution
- Use multiple processes
Reference: UI jank caused by blocking Electron's main process

Breaking up synchronous code
In the initial implementation, the result processing after file parsing was a CPU-intensive task using a for loop without any asynchronous operations.
Use the following code to simulate the scenario:
```javascript
(async () => {
  setInterval(() => {
    console.log('=====');
  }, 60);
  while (true) {
    // some processing
  }
})();
```
In order to solve this blocking problem, we can carry out the following modifications:
```javascript
(async () => {
  setInterval(() => {
    console.log('=====');
  }, 60);
  while (true) {
    await new Promise(resolve => {
      setImmediate(() => {
        // some processing
        resolve();
      });
    });
  }
})();
```
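The same yielding pattern can be wrapped in a small helper that processes an array in chunks, surrendering the event loop between chunks (the helper name and chunk size here are illustrative, not from the project):

```javascript
// Process `items` in chunks of `chunkSize`, yielding to the event loop
// between chunks so timers and IPC callbacks are not starved.
async function mapInChunks (items, fn, chunkSize = 1000) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(fn(item));
    }
    // Yield: let pending timers / IPC callbacks run before the next chunk
    await new Promise(resolve => setImmediate(resolve));
  }
  return results;
}

// Usage sketch
mapInChunks([1, 2, 3, 4, 5], x => x * x, 2).then(res => {
  console.log(res); // [1, 4, 9, 16, 25]
});
```

Between chunks, pending timers and IPC callbacks get a chance to run, which is exactly what keeps the renderer responsive.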
Using multiple processes
There are many options for multi-processing in Electron, such as Web Workers, Node's child_process module, the cluster module, the worker_threads module, and so on.
Because the native modules installed in the Electron project are recompiled, and because environment variables differ when the packaged application runs (so some system programs cannot be found), we cannot directly start our child process with child_process.exec or similar.
In this project, after many attempts, we finally settled on the following approach:
```javascript
// main process
const bkWorker = child_process.spawn(process.execPath /* 1 */, ['./app.js'], {
  stdio: [0, 1, 2, 'ipc'], /* 2 */
  cwd: __dirname,          /* 3 */
  env: process.env         /* 4 */
});
bkWorker.on('message', (message) => {
  // ...
});

// child process
process.send(/* ... */); /* 5 */
```
Explanation:
- /* 1 */ Specify the path of the program to execute; in the main process, process.execPath is the path of the Electron executable.
- /* 2 */ Open an IPC channel between the parent and child processes, so that we can communicate with process.send.
- /* 3 */ Set the current working directory of the child process.
- /* 4 */ Pass the parent process's environment variables through to the child.
- /* 5 */ In the child, process.send sends messages back over the IPC channel.
References: Node documentation – child_process.spawn; research on unexpected exit and restart handling of spawned child processes

Main process index.js
```javascript
const { spawn } = require('child_process');

let worker;

function serve () {
  worker = spawn(process.execPath, ['./test1.js'], {
    stdio: [0, 1, 2, 'ipc'],
    cwd: __dirname,
    env: process.env
  });
  worker.on('message', (...args) => { console.log('message', ...args); });
  worker.on('error', (...args) => { console.log('error', ...args); });
  worker.on('exit', (code, signal) => { console.log('exit', code, signal); });
  worker.on('disconnect', () => { console.log('disconnect'); });
  // When the child process exits, the worker's 'close' event fires,
  // so things like restarting the child can be done here.
  worker.on('close', (code, signal) => {
    console.log('close', code, signal);
    // serve();
  });

  // worker.kill() triggers the worker's disconnect, exit, close events
  // in turn, with exit arguments null, SIGTERM
  // setTimeout(() => {
  //   worker.kill();
  // }, 2000);
}

// process.exit: the main process exits immediately without affecting the
//   child process, which must be handled separately.
// process.abort: does not affect the child process; exiting the child must
//   be handled separately.
// throw Error: does not affect the child process; exiting the child must
//   be handled separately.
// setTimeout(() => {
//   // process.exit();
//   // process.abort();
//   // throw new Error('1231');
// }, 10000);

setInterval(() => {
  console.log(process.pid, '===');
}, 1000);

serve();
```
Child process test1.js
```javascript
process.on('uncaughtException', (error) => {
  console.log('worker uncaughtException', error);
  process.send({ type: 'error', msg: error });
});

process.send('connected');

setInterval(() => {
  console.log(process.pid, '---');
}, 1000);

// process.disconnect: the worker's 'disconnect' event fires; neither the
// main nor the child process exits, but the channel is closed, so
// process.send can no longer be used and will throw.
// setTimeout(() => {
//   process.disconnect();
//   // process.send('hello?');
// }, 3000);

// process.abort: the worker's disconnect, exit, close events fire in turn;
//   the exit and close arguments are null, SIGABRT.
// process.exit: the worker's disconnect, exit, close events fire in turn;
//   the exit and close arguments are 0, null.
// setTimeout(() => {
//   // process.abort();
//   // process.exit();
// }, 10000);

// Handle thrown errors with process.on('uncaughtException')
// setTimeout(() => {
//   throw new Error('123');
// }, 1500);
```
Execution and debugging tips for the packaged program

Running the program from the command line
Go to the directory where the packaged program is located; assuming the program is named Test, run ./Test to execute it. All console output from the program is printed in the terminal, so we can debug with the help of that output.

asar
When asar is enabled in the Windows package, the packaged file resources are archived into an .asar file. If we need to debug, we may have to change code and then repackage, reinstall, and rerun. But there is actually a more convenient way.
asar provides a command-line tool with which we can pack archives, list the files inside an archive, extract a single file, extract the whole archive, and so on.
Therefore, our post-packaging debugging process can be simplified as follows:
1. Close the application and extract the packaged .asar file
2. Modify the code, then repack the modified files and restart the application
Reference: npm – asar

Miscellaneous

Linux virtual machine installation and common environment setup
Virtual Machine Installation: http://note.youdao.com/notesh…
Environment setup: http://note.youdao.com/notesh…

VS Code configuration for debugging Electron
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Electron",
      "type": "node",
      "request": "launch",
      "cwd": "${workspaceRoot}",
      "program": "${workspaceFolder}/server/index.js",
      "runtimeExecutable": "${workspaceRoot}/server/node_modules/.bin/electron",
      "windows": {
        "runtimeExecutable": "${workspaceRoot}/server/node_modules/.bin/electron.cmd"
      },
      "args": ["."],
      "outputCapture": "std",
      "env": {
        "NODE_ENV": "development"
      }
    }
  ]
}
```
Using Hard Links to Manage File Resource Reference Relationships
There are two kinds of file links: hard links and symbolic links.
Hard links point directly to the data and increase the file's link count; there is only one copy of the data, and all links to it stay in sync. In a file system, data whose link count drops to 0 is deleted. Symbolic links record the location of a file and do not increase its link count; the shortcuts we commonly use are symbolic links. If the target file is deleted, the link does not disappear, but the resource it points to can no longer be found.
The project has this problem: the whole system revolves around the processing of file resources, the modules are chained together, and upstream output is downstream input. However, the resources at each stage need to be managed by users independently: one module's dependence on a resource must not prevent other modules from deleting it.
To keep the stages independent, one option is to save a copy of the file for each reference, but that wastes a lot of disk space and makes data synchronization hard to handle. So we ultimately chose hard links to implement this part of the requirements.
Hard links are simple to operate. The main operations in Node are as follows:
```javascript
const fs = require('fs');
fs.link(existingPath, newPath, callback);        // create a hard link
fs.unlink(path, callback);                       // delete a hard link
fs.stat(path, (err, stats) => stats.nlink);      // view the hard-link count
```
Reference resources:
- Analysis of Linux file links: hard links and symbolic links
- Research on the disk space occupied by hard links and soft links
- Node documentation – File System
Postgres data export
```sh
rm -f dlp.sql && pg_dump -U <username> -d <database name> -f <file name>.sql -h <server host> -p <port> -s
```
Reference: pg_dump error handling

Event-related notes
Errors thrown in event listeners propagate to the outer emit() call
```javascript
const { EventEmitter } = require('events');
const event = new EventEmitter();

event.on('test', () => {
  // throw new Error('1');
  try {
    throw new Error('2');
  } catch (e) {
    event.emit('error', e);
  }
});

event.on('error', (e) => {
  console.log(e); // Error: 2
});

try {
  event.emit('test');
} catch (e) {
  console.log(e); // Error: 1
}
```
Notes on synchronous and asynchronous code
Error handling in Promise
The await is crucial. Without await, the try block cannot catch the error re-thrown inside the .catch() handler.
```javascript
(async () => {
  try {
    let res = await new Promise(() => {
      throw new Error(2);
    }).catch(error => {
      if (error.message === '1') {
        return Promise.resolve('haha');
      } else throw error;
    });
    console.log(res);
  } catch (e) {
    debugger;
  }
})();
```
Moving files across drive letters (devices)
```javascript
function moveFileCrossDevice (source, target) {
  return new Promise((resolve, reject) => {
    try {
      if (!fs.existsSync(source)) {
        return reject(new BEKnownError('source file does not exist'));
      }
      let readStream = fs.createReadStream(source);
      let writeStream = fs.createWriteStream(target);
      readStream.on('end', function () {
        fs.unlinkSync(source);
        resolve();
      });
      readStream.on('error', (error) => { reject(error); });
      writeStream.on('error', (error) => { reject(error); });
      readStream.pipe(writeStream);
    } catch (e) {
      reject(e);
    }
  });
}
```