Performance — Defaults & Safe Overrides

cpp.js picks production-grade defaults for every Emscripten and CMake flag. Most apps should change nothing. This page lists every default cpp.js sets, marks each as "safe to override" or "don't touch", and shows when to reach for a tweak.

The rule: if your build runs and your app works, the defaults are fine. Only touch performance flags after measuring.

Defaults reference

Emscripten flags (always-on)

| Flag | Applies to | Purpose | Safe to override? |
| --- | --- | --- | --- |
| `-O3` | release | Max optimization | 🔒 Don't override unless debugging a codegen issue |
| `-O0` | debug | No optimization | ✅ Already debug; fine |
| `-msimd128` | wasm | SIMD128 instruction set | 🔒 Already optimal |
| `-sMEMORY64=1` | wasm64 only | 64-bit memory | 🔒 Set by target.arch, not flag override |
| `-pthread` + `-sPTHREAD_POOL_SIZE=Math.min(navigator.hardwareConcurrency \|\| 1, 2)` | mt only | Thread pool capped at 2 workers (1 if hardwareConcurrency is unavailable) | ⚠️ PTHREAD_POOL_SIZE is tunable (see below) |
| `-sPTHREAD_POOL_SIZE_STRICT=2` | mt only | Abort if more pthreads requested than pool (no dynamic growth) | ⚠️ Drop to 1 (warn + grow) or 0 (silent grow) only if your code spawns unbounded threads |
| `-lembind` | always | Embind binding lib | 🔒 Required |
| `-Wl,--whole-archive` | always | Link all objects | 🔒 Required for static lib symbol retention |
| `-fwasm-exceptions` | always | C++ exceptions via Wasm EH | 🔒 Required for proper throw semantics |
| `-sWASM_BIGINT=1` | always | BigInt for i64 | 🔒 Required for modern browsers |
| `-sWASM=1` | always | Output wasm (not asm.js) | 🔒 Don't touch |
| `-sMODULARIZE=1` | always | ES module wrapper | 🔒 cpp.js bundling depends on this |
| `-sDYNAMIC_EXECUTION=0` | always | Disable eval / new Function | 🔒 Required for CSP-strict environments |
| `-sRESERVED_FUNCTION_POINTERS=200` | always | Function table size | ⚠️ Increase if you hit "Cannot enlarge function table" |
| `-sALLOW_MEMORY_GROWTH=1` | always | Heap can grow at runtime | 🔒 Don't disable |
| `-sFORCE_FILESYSTEM=1` | browser, node | Always include FS | 🔒 cpp.js fs adapters depend on this |
| `-sWASMFS` | browser, node | New filesystem backend | 🔒 OPFS depends on this |
| `-sEXPORT_NAME=Module2` | always | JS namespace name | 🔒 cpp.js bundling depends on this |

Per-runtimeEnv flags

| runtimeEnv | `-sENVIRONMENT` | `-sEXPORTED_RUNTIME_METHODS` |
| --- | --- | --- |
| browser | `web,webview,worker` | `["FS", "ENV"]` |
| edge | `web` | `["ENV"]` |
| node | `node` | `["FS", "ENV"]` |

CMake flags

| Flag | Value / applies to | Purpose | Safe to override? |
| --- | --- | --- | --- |
| `-DPROJECT_NAME` | general.name | Internal | 🔒 Don't change |
| `-DPROJECT_TARGET_*` | platform/arch/runtime/buildType | Routing | 🔒 Don't change |
| `-DBUILD_TYPE=STATIC` | wasm, ios | Static libs | 🔒 Required by emcc / iOS frameworks |
| `-DBUILD_TYPE=SHARED` | android | Shared libs | 🔒 Required by Android dynamic loading |
| `-DBUILD_SHARED_LIBS=OFF` | wasm, ios | Inverse of above | 🔒 Don't override |
| `-DCMAKE_TOOLCHAIN_FILE` | per-platform | Toolchain pin | 🔒 Don't override |
| `-DANDROID_PLATFORM=android-33` | android | NDK API level | ⚠️ Lower for older devices |
| `-DCMAKE_OSX_DEPLOYMENT_TARGET=13.0` | ios | iOS minimum | ⚠️ Lower for older iOS |

System defaults

| Variable | Default | Source |
| --- | --- | --- |
| Android NDK | 27.3.13750724 | Docker image pin |
| Android API | 33 | CMake flag |
| iOS deployment | 13.0 | CMake flag |
| Bitcode | embedded (release) / marker (debug) | iOS only |
| Emscripten cache | `~/.cppjs/emscripten/` | Docker volume |

What's safe to override

INITIAL_MEMORY (default: 16MB, the Wasm default)

If you allocate large objects on startup (loading a model, opening a large geo dataset), the runtime grows memory dynamically — but you'll see growth pauses. Pre-allocating reduces pauses:

```js
// cppjs.config.js
targetSpecs: [{
  platform: 'wasm',
  specs: { emccFlags: ['-sINITIAL_MEMORY=64MB'] },
}]
```

Sweet spot: pre-allocate ~2× your steady-state usage. Going higher just delays startup.
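One way to find that number is to sample the live heap once your app reaches steady state. A minimal sketch of the arithmetic; the heap size would come from the loaded module instance, e.g. the standard Emscripten `HEAPU8` view (the handle name `m` is a placeholder):

```js
// Heuristic: pre-allocate ~2x the steady-state heap size, rounded up to MB.
// In practice heapBytes = m.HEAPU8.length, sampled after startup allocations settle.
function suggestedInitialMemoryMB(heapBytes) {
  const mb = Math.ceil((heapBytes * 2) / (1024 * 1024));
  return `-sINITIAL_MEMORY=${mb}MB`;
}

// e.g. a 24 MB steady-state heap suggests pre-allocating 48 MB:
suggestedInitialMemoryMB(24 * 1024 * 1024); // → '-sINITIAL_MEMORY=48MB'
```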

MAXIMUM_MEMORY (default: ~2GB on wasm32)

Browser cap is ~4GB on wasm32. If you genuinely need more (large geospatial / scientific datasets), switch to the wasm64 target instead:

```js
// cppjs.config.js
target: { arch: 'wasm64' }
```

Don't try to push wasm32 past 4GB; the Wasm spec's 32-bit address space doesn't allow it.

PTHREAD_POOL_SIZE (default: Math.min(navigator.hardwareConcurrency || 1, 2))

The default expression is evaluated by Emscripten at module load (not baked at build time):

  • Caps the pool at 2 worker threads, even on 16-core devices.
  • Falls back to 1 if navigator.hardwareConcurrency is unavailable (older browsers, Node, edge runtimes).
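The two rules above can be expressed as plain JS; this is a sketch of how the default expression resolves, not a cpp.js API:

```js
// Pool size at module load: capped at 2 workers,
// falling back to 1 when hardwareConcurrency is unavailable.
function defaultPoolSize(hardwareConcurrency) {
  return Math.min(hardwareConcurrency || 1, 2);
}

defaultPoolSize(16);        // → 2 (16-core desktop: still capped at 2)
defaultPoolSize(undefined); // → 1 (no navigator.hardwareConcurrency)
```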

Why the cap? Each pthread is a Web Worker — non-trivial memory footprint, startup latency, and WASM heap duplication. For typical workloads two workers (main + one) covers parallelism; going past 2 trades memory and load time for diminishing returns. Bump only when you've measured CPU-bound parallelism that scales further.

Paired with -sPTHREAD_POOL_SIZE_STRICT=2: if your code requests more pthreads than the pool, the runtime aborts. No dynamic growth, no main-thread deadlock risk. If you bump the pool, also relax strictness or keep your spawn count under the new cap.

Override scenarios:

  • CPU-bound parallelism that benefits from more workers (tile-by-tile image processing, parallel decoding): bump the pool and relax strict mode:

    ```js
    targetSpecs: [{
      platform: 'wasm', runtime: 'mt',
      specs: { emccFlags: ['-sPTHREAD_POOL_SIZE=8', '-sPTHREAD_POOL_SIZE_STRICT=1'] },
    }]
    ```

  • Want even less memory pressure: pin to one worker:

    ```js
    targetSpecs: [{
      platform: 'wasm', runtime: 'mt',
      specs: { emccFlags: ['-sPTHREAD_POOL_SIZE=1'] },
    }]
    ```

Past the machine's real core count (navigator.hardwareConcurrency), context-switching costs dominate anyway.

RESERVED_FUNCTION_POINTERS (default: 200)

If you see `Cannot enlarge function table` at runtime, bump this:

```js
targetSpecs: [{ specs: { emccFlags: ['-sRESERVED_FUNCTION_POINTERS=1024'] } }]
```

Most apps never hit this. Function pointers are used by virtual methods, std::function captures, and JS callbacks into C++.

Android API level

Default android-33 (Android 13). Lower if you support older devices:

```js
targetSpecs: [{
  platform: 'android',
  specs: { cmake: ['-DANDROID_PLATFORM=android-26'] }, // Android 8.0
}]
```

Don't go below 26 unless you absolutely have to; older API levels lack key libc / libc++ features (e.g. aligned_alloc, a complete <filesystem>).

iOS deployment target

Default 13.0. Lower if you support older iOS:

```js
targetSpecs: [{
  platform: 'ios',
  specs: { cmake: ['-DCMAKE_OSX_DEPLOYMENT_TARGET=12.0'] },
}]
```

Don't go below 12.0; older iOS versions lack the C++17 standard library features that cpp.js's auto-generated code relies on.

JSPI (experimental, Chrome-only)

Lets C++ code synchronously await JS promises. Use only when you have a specific cross-boundary async pattern (background fetching mid-C++):

```js
targetSpecs: [{
  platform: 'wasm',
  specs: { emccFlags: ['-sJSPI'] },
}]
```

Cost: a larger Wasm binary (~10-20% bigger) and a slower call boundary. Only enable it if you measure an improvement. Pair it with the _JSPI naming convention (see C++ binding rules).

What NOT to override

-O3 (release)

Always use -O3 in release. Don't switch to -O2 or -Os thinking you'll get a smaller binary — -O3 produces faster and often smaller output for typical C++ workloads.

-fwasm-exceptions

Required. Without it, C++ exceptions either silently abort or use the slower legacy emulation.

-sFORCE_FILESYSTEM=1, -sWASMFS

Required. cpp.js's fs adapters (browser-fs, node-fs) depend on these. Disabling breaks m.FS.*.

-sMODULARIZE=1, -sEXPORT_NAME=Module2

Required. cpp.js's runtime entry assumes the modular wrapper with this exact export name.

-sDYNAMIC_EXECUTION=0

Overriding this to 1 re-enables eval / new Function inside the generated JS runtime, which breaks CSP-strict deployments.

-msimd128

Already optimal for Wasm. Removing it forfeits a free 2-4× speedup on SIMD-friendly workloads.

Common "performance" mistakes

"I'll switch to -Os for smaller bundle"

-O3 + bundler-side dead code elimination already produces smaller binaries than -Os for typical apps. Measure before assuming.

"I'll disable ALLOW_MEMORY_GROWTH for predictable allocation"

You'll just hit "out of memory" in the first user interaction that needs more than INITIAL_MEMORY. Memory growth is cheap; OOM crashes aren't.

"I'll disable WASMFS since I don't need files"

cpp.js's adapter layer assumes m.FS exists. Disabling breaks even simple operations like reading a file you bundled into the .data preload.

"I'll remove -fwasm-exceptions since my code doesn't throw"

Your C++ might not throw, but std::vector / std::string / std::map can. Removing exception support causes them to abort instead of throwing — silent crashes.

"I'll bump PTHREAD_POOL_SIZE to 32 for max parallelism"

Each pthread is a Web Worker with its own WASM instance — bumping pool size to 32 inflates memory and module startup time without a matching speedup. The default caps at 2 deliberately: enough to parallelize the hot path, cheap to spin up. Bump only when measurement shows your workload scales (e.g. embarrassingly parallel image / geo / crypto loops), and pair the bump with -sPTHREAD_POOL_SIZE_STRICT=1 so spawning more than the new cap doesn't trip the strict abort.

Profiling

Before reaching for any override, measure:

  1. Browser DevTools Performance tab — timing breakdown of JS / Wasm calls.
  2. Wasm-side printf — quick timestamps with console.debug (cpp.js's print hook).
  3. Memory tab — heap profile to see allocation patterns.
  4. Lighthouse / PageSpeed — Wasm bundle download is part of FCP / LCP.
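For step 1, a quick microbenchmark often settles the question before you open DevTools. A minimal sketch; the exported function `m.processTile` in the usage comment is hypothetical:

```js
// Average wall-clock time of a call over N iterations.
// performance.now() is available in browsers and modern Node.
function timeCall(fn, iterations = 100) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  return (performance.now() - start) / iterations; // avg ms per call
}

// Usage against a hypothetical Wasm export:
// const avgMs = timeCall(() => m.processTile(tile));
```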

Don't optimize speculatively. The defaults handle 99% of workloads.

See also