forked from pool/wasmedge

7 Commits

SHA Message Date
84fff40a89 Accepting request 1187712 from devel:languages:javascript
OBS-URL: https://build.opensuse.org/request/show/1187712
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/wasmedge?expand=0&rev=3
2024-07-19 13:25:05 +00:00
8a3e559fe1 - Add fmt11.patch to resolve FTBFS (fails to build from source)
OBS-URL: https://build.opensuse.org/package/show/devel:languages:javascript/wasmedge?expand=0&rev=6
2024-07-16 09:08:44 +00:00
34887b2f91 Accepting request 1132089 from devel:languages:javascript
OBS-URL: https://build.opensuse.org/request/show/1132089
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/wasmedge?expand=0&rev=2
2023-12-09 21:49:10 +00:00
72856eebfb Accepting request 1128890 from home:dirkmueller:Factory
- update to 0.13.5:
  * [Component] share loading entry for component and module
    (#2945)
  * Initial support for the component model proposal. This allows
    WasmEdge to recognize the component and module formats.
  * Provide options for enabling OpenBLAS, Metal, and cuBLAS.
  * Bump llama.cpp to b1383
  * Build thirdparty/ggml only when the ggml backend is enabled.
  * Enable the ggml plugin on the macOS platform.
  * Introduce `AUTO` detection. A Wasm application no longer needs
    to specify the hardware spec (e.g., CPU or GPU); the runtime
    auto-detects it.
  * Unify the preload options with case-insensitive matching.
  * Introduce `metadata` for setting the ggml options.
  * The following options are supported:
  * `enable-log`: `true` to enable logging. (default: `false`)
  * `stream-stdout`: `true` to stream the inferred tokens to
    standard output as they are generated. (default: `false`)
  * `ctx-size`: Set the context size the same as the `--ctx-size`
    parameter in llama.cpp. (default: `512`)
  * `n-predict`: Set the number of tokens to predict, the same as
    the `--n-predict` parameter in llama.cpp. (default: `512`)
  * `n-gpu-layers`: Set the number of layers to store in VRAM,
    the same as the `--n-gpu-layers` parameter in llama.cpp.
    (default: `0`)
  * `reverse-prompt`: Set the token pattern at which you want to
    halt the generation. Similar to the `--reverse-prompt`
    parameter in llama.cpp. (default: `""`)
  * `batch-size`: Set the number of batch sizes for prompt

OBS-URL: https://build.opensuse.org/request/show/1128890
OBS-URL: https://build.opensuse.org/package/show/devel:languages:javascript/wasmedge?expand=0&rev=4
2023-12-08 12:48:21 +00:00
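The ggml `metadata` defaults listed in the 0.13.5 entry above can be collected into a single JSON document. A minimal sketch, assuming the plugin accepts the options as a JSON string (the key names and defaults come from the changelog; how the string is handed to the runtime depends on the host API and is not shown here):

```python
import json

# Defaults of the ggml plugin metadata options, as listed in the
# 0.13.5 changelog. `batch-size` is omitted because its entry in the
# changelog is truncated.
metadata = {
    "enable-log": False,     # enable logging
    "stream-stdout": False,  # stream inferred tokens to stdout
    "ctx-size": 512,         # context size, like llama.cpp --ctx-size
    "n-predict": 512,        # tokens to predict, like --n-predict
    "n-gpu-layers": 0,       # layers kept in VRAM, like --n-gpu-layers
    "reverse-prompt": "",    # token pattern that halts generation
}

# Serialize to the JSON string a host application would pass along.
print(json.dumps(metadata))
```

The keys mirror the llama.cpp command-line flags one-for-one, so a value that works with standalone llama.cpp should carry over unchanged.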
723c66a36f Accepting request 1105474 from devel:languages:javascript
Add WasmEdge

OBS-URL: https://build.opensuse.org/request/show/1105474
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/wasmedge?expand=0&rev=1
2023-08-28 15:12:01 +00:00
5a10204572 Accepting request 1105473 from home:avicenzi:wasm
cleanup

OBS-URL: https://build.opensuse.org/request/show/1105473
OBS-URL: https://build.opensuse.org/package/show/devel:languages:javascript/wasmedge?expand=0&rev=2
2023-08-23 11:35:54 +00:00
933b1317e2 Accepting request 1105212 from home:avicenzi:wasm
Add WasmEdge

OBS-URL: https://build.opensuse.org/request/show/1105212
OBS-URL: https://build.opensuse.org/package/show/devel:languages:javascript/wasmedge?expand=0&rev=1
2023-08-22 19:01:55 +00:00