treewide: fix typos (#479869)

Michael Daniels 2026-01-24 21:36:44 +00:00 committed by GitHub
commit 006ecdbdeb
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
72 changed files with 194 additions and 194 deletions


@@ -10,18 +10,18 @@ Some architectural notes about key decisions and concepts in our workflows:
 Thus they should be lowered to the minimum with `permissions: {}` in every workflow by default.
 - By definition `pull_request_target` runs in the context of the **base** of the pull request.
-This means, that the workflow files to run will be taken from the base branch, not the PR, and actions/checkout will not checkout the PR, but the base branch, by default.
+This means that the workflow files to run will be taken from the base branch, not the PR, and actions/checkout will not checkout the PR, but the base branch, by default.
 To protect our secrets, we need to make sure to **never execute code** from the pull request and always evaluate or build nix code from the pull request with the **sandbox enabled**.
 - To test the pull request's contents, we checkout the "test merge commit".
-This is a temporary commit that GitHub creates automatically as "what would happen, if this PR was merged into the base branch now?".
+This is a temporary commit that GitHub creates automatically as "what would happen if this PR was merged into the base branch now?".
 The checkout could be done via the virtual branch `refs/pull/<pr-number>/merge`, but doing so would cause failures when this virtual branch doesn't exist (anymore).
 This can happen when the PR has conflicts, in which case the virtual branch is not created, or when the PR is getting merged while workflows are still running, in which case the branch won't exist anymore at the time of checkout.
 Thus, we use the `prepare` job to check whether the PR is mergeable and the test merge commit exists and only then run the relevant jobs.
 - Various workflows need to make comparisons against the base branch.
 In this case, we checkout the parent of the "test merge commit" for best results.
-Note, that this is not necessarily the same as the default commit that actions/checkout would use, which is also a commit from the base branch (see above), but might be older.
+Note that this is not necessarily the same as the default commit that actions/checkout would use, which is also a commit from the base branch (see above), but might be older.
 ## Terminology


@@ -680,7 +680,7 @@ If you have any problems with formatting, please ping the [formatting team](http
 { buildInputs = if stdenv.hostPlatform.isDarwin then [ iconv ] else null; }
 ```
-As an exception, an explicit conditional expression with null can be used when fixing a important bug without triggering a mass rebuild.
+As an exception, an explicit conditional expression with null can be used when fixing an important bug without triggering a mass rebuild.
 If this is done a follow up pull request _should_ be created to change the code to `lib.optional(s)`.
 - Any style choices not covered here but that can be expressed as general rules should be left at the discretion of the authors of changes and _not_ commented in reviews.
@@ -865,7 +865,7 @@ If someone approved and didn't merge a few days later, they most likely just for
 Please see it as your responsibility to actively remind reviewers of your open PRs.
 The easiest way to do so is to notify them via GitHub.
-Github notifies people involved, whenever you add a comment or push to your PR or re-request their review.
+GitHub notifies people involved, whenever you add a comment or push to your PR or re-request their review.
 Doing any of that will get their attention again.
 Everyone deserves proper attention, and yes, that includes you!
 However, please be mindful that committers can sadly not always give everyone the attention they deserve.


@@ -70,7 +70,7 @@ For more information about contributing to the project, please visit the [contri
 The infrastructure for NixOS and related projects is maintained by a nonprofit organization, the [NixOS Foundation](https://nixos.org/nixos/foundation.html).
 To ensure the continuity and expansion of the NixOS infrastructure, we are looking for donations to our organization.
-You can donate to the NixOS foundation through [SEPA bank transfers](https://nixos.org/donate.html) or by using Open Collective:
+You can donate to the NixOS Foundation through [SEPA bank transfers](https://nixos.org/donate.html) or by using Open Collective:
 <a href="https://opencollective.com/nixos#support"><img src="https://opencollective.com/nixos/tiers/supporter.svg?width=890" /></a>


@@ -24,7 +24,7 @@ The Nixpkgs merge bot empowers package maintainers by enabling them to merge PRs
 It serves as a bridge for maintainers to quickly respond to user feedback, facilitating a more self-reliant approach.
 Especially when considering there are roughly 20 maintainers for every committer, this bot is a game-changer.
-Following [RFC 172] the merge bot was originally implemented as a [python webapp](https://github.com/NixOS/nixpkgs-merge-bot), which has now been integrated into [`ci/github-script/bot.js`](./github-script/bot.js) and [`ci/github-script/merge.js`](./github-script/merge.js).
+Following [RFC 172], the merge bot was originally implemented as a [python webapp](https://github.com/NixOS/nixpkgs-merge-bot), which has now been integrated into [`ci/github-script/bot.js`](./github-script/bot.js) and [`ci/github-script/merge.js`](./github-script/merge.js).
 ### Using the merge bot


@@ -19,7 +19,7 @@ The following arguments can be used to fine-tune performance:
 - `--max-jobs`: The maximum number of derivations to run at the same time.
 Only each [supported system](../supportedSystems.json) gets a separate derivation, so it doesn't make sense to set this higher than that number.
 - `--cores`: The number of cores to use for each job.
-Recommended to set this to the amount of cores on your system divided by `--max-jobs`.
+Recommended to set this to the number of cores on your system divided by `--max-jobs`.
 - `--arg chunkSize`: The number of attributes that are evaluated simultaneously on a single core.
 Lowering this decreases memory usage at the cost of increased evaluation time.
 If this is too high, there won't be enough chunks to process them in parallel, and will also increase evaluation time.


@@ -111,7 +111,7 @@ This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/us
 #### HTML
 Inlining HTML is not allowed.
-Parts of the documentation gets rendered to various non-HTML formats, such as man pages in the case of NixOS manual.
+Parts of the documentation get rendered to various non-HTML formats, such as man pages in the case of NixOS manual.
 #### Roles
@@ -407,7 +407,7 @@ To define a referenceable figure use the following fencing:
 :::
 ```
-Defining figures through the `figure` fencing class adds them to a `List of Figures` after the `Table of Contents`.
+Defining figures through the `figure` fencing class adds them to a `List of Figures` after the `Table of Contents`.
 Though this is not shown in the rendered documentation on nixos.org.
 #### Footnotes


@@ -4,7 +4,7 @@ The `nix-shell` command has popularized the concept of transient shell environme
 <!--
 We should try to document the product, not its development process in the Nixpkgs reference manual,
 but *something* needs to be said to provide context for this library.
-This is the most future proof sentence I could come up with while Nix itself does yet make use of this.
+This is the most future proof sentence I could come up with while Nix itself does not yet make use of this.
 Relevant is the current status of the devShell attribute "project": https://github.com/NixOS/nix/issues/7501
 -->
 However, `nix-shell` is not the only way to create such environments, and even `nix-shell` itself can indirectly benefit from this library.
@@ -60,7 +60,7 @@ devShellTools.unstructuredDerivationInputEnv {
 #}
 ```
-Note that `args` is not included, because Nix does not added it to the builder process environment.
+Note that `args` is not included, because Nix does not add it to the builder process environment.
 :::


@@ -536,7 +536,7 @@ See [](#chap-pkgs-fetchers-caveats) for more details on how to work with the `ha
 Returns a [fixed-output derivation](https://nixos.org/manual/nix/stable/glossary.html#gloss-fixed-output-derivation) which downloads an archive from a given URL and decompresses it.
 Despite its name, `fetchzip` is not limited to `.zip` files but can also be used with [various compressed tarball formats](#tar-files) by default.
-This can extended by specifying additional attributes, see [](#ex-fetchers-fetchzip-rar-archive) to understand how to do that.
+This can be extended by specifying additional attributes, see [](#ex-fetchers-fetchzip-rar-archive) to understand how to do that.
 ### Inputs {#sec-pkgs-fetchers-fetchzip-inputs}
@@ -765,7 +765,7 @@ Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `hash`
 ## `fetchgit` {#fetchgit}
-Used with Git. Expects `url` to a Git repo, `rev` or `tag`, and `hash`. `rev` in this case can be full the git commit id (SHA1 hash), or use `tag` for a tag name like `refs/tags/v1.0`.
+Used with Git. Expects `url` to a Git repo, `rev` or `tag`, and `hash`. `rev` in this case can be the full git commit id (SHA1 hash), or use `tag` for a tag name like `refs/tags/v1.0`.
 If you want to fetch a tag you should pass the `tag` parameter instead of `rev` which has the same effect as setting `rev = "refs/tags/${version}"`.
 This is safer than just setting `rev = version` w.r.t. possible branch and tag name conflicts.
@@ -799,7 +799,7 @@ Additionally, the following optional arguments can be given:
 *`deepClone`* (Boolean)
-: Clone the entire repository as opposing to just creating a shallow clone.
+: Clone the entire repository as opposed to just creating a shallow clone.
 This implies `leaveDotGit`.
 *`fetchTags`* (Boolean)


@@ -8,7 +8,7 @@ Build helpers don't always support fixed-point arguments yet, as support in [`st
 Developers can use the Nixpkgs library function [`lib.customisation.extendMkDerivation`](#function-library-lib.customisation.extendMkDerivation) to define a build helper supporting fixed-point arguments from an existing one with such support, with an attribute overlay similar to the one taken by [`<pkg>.overrideAttrs`](#sec-pkg-overrideAttrs).
-Besides overriding, `lib.extendMkDerivation` also supports `excludeDrvArgNames` to optionally exclude some arguments in the input fixed-point arguments from passing down the base build helper (specified as `constructDrv`).
+Besides overriding, `lib.extendMkDerivation` also supports `excludeDrvArgNames` to optionally exclude some arguments in the input fixed-point arguments from passing down to the base build helper (specified as `constructDrv`).
 :::{.example #ex-build-helpers-extendMkDerivation}


@@ -73,7 +73,7 @@ Similarly, if you encounter errors similar to `Error_Protocol ("certificate has
 A value of `null` means that `buildImage` will use the first image available in the repository.
 :::{.note}
-This must be used with `fromImageName`. Using only `fromImageTag` without `fromImageName` will make `buildImage` use the first image available in the repository
+This must be used with `fromImageName`. Using only `fromImageTag` without `fromImageName` will make `buildImage` use the first image available in the repository.
 :::
 _Default value:_ `null`.
@@ -1013,7 +1013,7 @@ Because of this, using this function requires the `kvm` device to be available,
 A value of `null` means that `exportImage` will use the first image available in the repository.
 :::{.note}
-This must be used with `fromImageName`. Using only `fromImageTag` without `fromImageName` will make `exportImage` use the first image available in the repository
+This must be used with `fromImageName`. Using only `fromImageTag` without `fromImageName` will make `exportImage` use the first image available in the repository.
 :::
 _Default value:_ `null`.
@@ -1145,7 +1145,7 @@ $ file /nix/store/by3f40xvc4l6bkis74l0fj4zsy0djgkn-hello.tar.gz
 /nix/store/by3f40xvc4l6bkis74l0fj4zsy0djgkn-hello.tar.gz: POSIX tar archive (GNU)
 ```
-If the archive was actually compressed, the output of file would've mentioned that fact.
+If the archive was actually compressed, the output of `file` would've mentioned that fact.
 Because of this, it may be important to set a proper `name` attribute when using `exportImage` with other functions from `dockerTools`.
 :::


@@ -8,7 +8,7 @@ This function can create images in two ways:
 - using a virtual machine to create a full NixOS installation.
 When testing early-boot or lifecycle parts of NixOS such as a bootloader or multiple generations, it is necessary to opt for a full NixOS system installation.
-Whereas for many web servers, applications, it is possible to work with a Nix store only disk image and is faster to build.
+Whereas for many web servers and applications, it is possible to work with a Nix store only disk image, which is faster to build.
 NixOS tests also use this function when preparing the VM. The `cptofs` method is used when `virtualisation.useBootLoader` is false (the default). Otherwise the second method is used.
@@ -39,7 +39,7 @@ Features are separated in various sections depending on if you opt for a Nix-sto
 ### On bit-to-bit reproducibility {#sec-make-disk-image-features-reproducibility}
-Images are **NOT** deterministic, please do not hesitate to try to fix this, source of determinisms are (not exhaustive) :
+Images are **NOT** deterministic. Please do not hesitate to try to fix this. Sources of non-determinism are (not exhaustive):
 - bootloader installation has timestamps
 - SQLite Nix store database contains registration times


@@ -5,8 +5,8 @@ It makes no assumptions about the container runner you choose to use to run the
 The set of functions in `pkgs.ociTools` currently does not handle the [OCI image specification](https://github.com/opencontainers/image-spec).
-At a high-level an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle.
-At this point the OCI Runtime Bundle would be run by an OCI Runtime.
+At a high level, an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle.
+At this point, the OCI Runtime Bundle would be run by an OCI Runtime.
 `pkgs.ociTools` provides utilities to create OCI Runtime bundles.
 ## buildContainer {#ssec-pkgs-ociTools-buildContainer}
@@ -54,7 +54,7 @@ Note that no user namespace is created, which means that you won't be able to ru
 `os` **DEPRECATED**
-: Specifies the operating system on which the container filesystem is based on.
+: Specifies the operating system on which the container filesystem is based.
 If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties).
 According to the linked specification, all possible values for `$GOOS` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `darwin` or `linux`.


@@ -6,7 +6,7 @@ For hermeticity, Nix derivations do not allow any state to be carried over betwe
 However, we can tell Nix explicitly what the previous build state was, by representing that previous state as a derivation output. This allows the passed build state to be used for an incremental build.
-To change a normal derivation to a checkpoint based build, these steps must be taken:
+To change a normal derivation to a checkpoint-based build, these steps must be taken:
 ```nix
 {
 checkpointArtifacts = (pkgs.checkpointBuildTools.prepareCheckpointBuild pkgs.virtualbox);


@@ -14,11 +14,11 @@ Accepted arguments are:
 - `executableName`
 The name of the wrapper executable. Defaults to `pname` if set, or `name` otherwise.
 - `targetPkgs`
-Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries binaries are also installed.
+Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
 - `multiPkgs`
 Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
 - `multiArch`
-Whether to install 32bit multiPkgs into the FHSEnv in 64bit environments
+Whether to install 32-bit multiPkgs into the FHSEnv in 64-bit environments
 - `extraBuildCommands`
 Additional commands to be executed for finalizing the directory structure.
 - `extraBuildCommandsMulti`


@@ -1,6 +1,6 @@
 # pkgs.makeSetupHook {#sec-pkgs.makeSetupHook}
-`pkgs.makeSetupHook` is a build helper that produces hooks that go in to `nativeBuildInputs`
+`pkgs.makeSetupHook` is a build helper that produces hooks that go into `nativeBuildInputs`
 ## Usage {#sec-pkgs.makeSetupHook-usage}


@@ -136,7 +136,7 @@ A set of functions that build a predefined set of minimal Linux distributions im
 ### Attributes {#vm-tools-diskImageFuns-attributes}
 * `size` (optional, defaults to `4096`). The size of the image, in MiB.
-* `extraPackages` (optional). A list names of additional packages from the distribution that should be included in the image.
+* `extraPackages` (optional). A list of names of additional packages from the distribution that should be included in the image.
 ### Examples {#vm-tools-diskImageFuns-examples}


@@ -63,7 +63,7 @@ Note the moduleNames used in cmake find_package are case sensitive.
 Check a packaged static site's links with the [`lychee` package](https://search.nixos.org/packages?show=lychee&type=packages&query=lychee).
 You may use Nix to reproducibly build static websites, such as for software documentation.
-Some packages will install documentation in their `out` or `doc` outputs, or maybe you have dedicated package where you've made your static site reproducible by running a generator, such as [Hugo](https://gohugo.io/) or [mdBook](https://rust-lang.github.io/mdBook/), in a derivation.
+Some packages will install documentation in their `out` or `doc` outputs, or maybe you have a dedicated package where you've made your static site reproducible by running a generator, such as [Hugo](https://gohugo.io/) or [mdBook](https://rust-lang.github.io/mdBook/), in a derivation.
 If you have a static site that can be built with Nix, you can use `lycheeLinkCheck` to check that the hyperlinks in your site are correct, and do so as part of your Nix workflow and CI.
@@ -578,7 +578,7 @@ Use the derivation hash to invalidate the output via name, for testing.
 Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
-Normally, fixed output derivations can and should be cached by their output hash only, but for testing we want to re-fetch everytime the fetcher changes.
+Normally, fixed output derivations can and should be cached by their output hash only, but for testing we want to re-fetch every time the fetcher changes.
 Changes to the fetcher become apparent in the drvPath, which is a hash of how to fetch, rather than a fixed store path.
 By inserting this hash into the name, we can make sure to re-run the fetcher every time the fetcher changes.


@@ -8,7 +8,7 @@ Like [`stdenv.mkDerivation`](#sec-using-stdenv), each of these build helpers cre
 The function `runCommandWith` returns a derivation built using the specified command(s), in a specified environment.
-It is the underlying base function of all [`runCommand*` variants].
+It is the underlying base function of all [`runCommand*` variants].
 The general behavior is controlled via a single attribute set passed
 as the first argument, and allows specifying `stdenv` freely.
@@ -45,7 +45,7 @@ runCommandWith :: {
 :::
 `stdenv` (Derivation)
-: The [standard environment](#chap-stdenv) to use, defaulting to `pkgs.stdenv`
+: The [standard environment](#chap-stdenv) to use, defaulting to `pkgs.stdenv`.
 `derivationArgs` (Attribute set)
 : Additional arguments for [`mkDerivation`](#sec-using-stdenv).
@@ -160,7 +160,7 @@ runCommandWith { inherit name derivationArgs; } buildCommand
 ## Writing text files {#trivial-builder-text-writing}
 Nixpkgs provides the following functions for producing derivations which write text files or executable scripts into the Nix store.
-They are useful for creating files from Nix expression, and are all implemented as convenience wrappers around `writeTextFile`.
+They are useful for creating files from Nix expressions, and are all implemented as convenience wrappers around `writeTextFile`.
 Each of these functions will cause a derivation to be produced.
 When you coerce the result of each of these functions to a string with [string interpolation](https://nixos.org/manual/nix/stable/language/string-interpolation) or [`toString`](https://nixos.org/manual/nix/stable/language/builtins#builtins-toString), it will evaluate to the [store path](https://nixos.org/manual/nix/stable/store/store-path) of this derivation.
@@ -682,7 +682,7 @@ writeTextFile {
 ## `concatTextFile`, `concatText`, `concatScript` {#trivial-builder-concatText}
 These functions concatenate `files` to the Nix store in a single file. This is useful for configuration files structured in lines of text. `concatTextFile` takes an attribute set and expects two arguments, `name` and `files`. `name` corresponds to the name used in the Nix store path. `files` will be the files to be concatenated. You can also set `executable` to true to make this file have the executable bit set.
-`concatText` and`concatScript` are simple wrappers over `concatTextFile`.
+`concatText` and `concatScript` are simple wrappers over `concatTextFile`.
 Here are a few examples:
 ```nix


@@ -2,6 +2,6 @@
 * Make sure you have a [GitHub account](https://github.com/signup/free)
 * Make sure there is no open issue on the topic
-* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and fill out the template
+* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and filling out the template
 <!-- In the future this section could also include more detailed information on the issue templates -->


@@ -28,7 +28,7 @@ Packages, including the Nix packages collection, are distributed through
 [channels](https://nixos.org/nix/manual/#sec-channels). The collection is
 distributed for users of Nix on non-NixOS distributions through the channel
 `nixpkgs-unstable`. Users of NixOS generally use one of the `nixos-*` channels,
-e.g. `nixos-22.11`, which includes all packages and modules for the stable NixOS
+e.g., `nixos-22.11`, which includes all packages and modules for the stable NixOS
 22.11. Stable NixOS releases are generally only given
 security updates. More up-to-date packages and modules are available via the
 `nixos-unstable` channel.
@@ -36,7 +36,7 @@ security updates. More up-to-date packages and modules are available via the
 Both `nixos-unstable` and `nixpkgs-unstable` follow the `master` branch of the
 Nixpkgs repository, although both do lag the `master` branch by generally
 [a couple of days](https://status.nixos.org/). Updates to a channel are
-distributed as soon as all tests for that channel pass, e.g.
+distributed as soon as all tests for that channel pass, e.g.,
 [this table](https://hydra.nixos.org/job/nixpkgs/trunk/unstable#tabs-constituents)
 shows the status of tests for the `nixpkgs-unstable` channel.
@@ -47,4 +47,4 @@ The binaries are made available via a [binary cache](https://cache.nixos.org).
 The current Nix expressions of the channels are available in the
 [Nixpkgs repository](https://github.com/NixOS/nixpkgs) in branches
-that correspond to the channel names (e.g. `nixos-22.11-small`).
+that correspond to the channel names (e.g., `nixos-22.11-small`).


@@ -16,7 +16,7 @@ It should have the following properties:
 Non-goals are:
 - Efficient:
-If the abstraction proves itself worthwhile but too slow, it can be still be optimized further.
+If the abstraction proves itself worthwhile but too slow, it can still be optimized further.
 ## Tests
@@ -90,7 +90,7 @@ One of the following:
 - `"regular"`, `"symlink"`, `"unknown"` or any other non-`"directory"` string:
 A nested file with its file type.
 These specific strings are chosen to be compatible with `builtins.readDir` for a simpler implementation.
-Distinguishing between different file types is not strictly necessary for the functionality this library,
+Distinguishing between different file types is not strictly necessary for the functionality of this library,
 but it does allow nicer printing of file sets.
 - `null`:
@@ -127,7 +127,7 @@ Arguments:
 ### Empty file set without a base
 There is a special representation for an empty file set without a base path.
-This is used for return values that should be empty but when there's no base path that would makes sense.
+This is used for return values that should be empty but when there's no base path that would make sense.
 Arguments:
 - Alternative: This could also be represented using `_internalBase = /.` and `_internalTree = null`.


@@ -169,7 +169,7 @@ See its [README](./scripts/README.md) for further information.
 # nixpkgs-merge-bot
-To streamline autoupdates, leverage the nixpkgs-merge-bot by commenting `@NixOS/nixpkgs-merge-bot merge` if the package resides in pkgs-by-name, the commenter is among the package maintainers, and the pull request author is @r-ryantm or a Nixpkgs committer.
+To streamline autoupdates, leverage the nixpkgs-merge-bot by commenting `@NixOS/nixpkgs-merge-bot merge` if the package resides in `pkgs/by-name`, the commenter is among the package maintainers, and the pull request author is @r-ryantm or a Nixpkgs committer.
 The bot ensures that all ofborg checks, except for darwin, are successfully completed before merging the pull request.
 Should the checks still be underway, the bot patiently waits for ofborg to finish before attempting the merge again.


@@ -47,7 +47,7 @@ robustly than text search through `maintainer-list.nix`.
 The maintainer is designated by a `selector` which must be one of:
 - `handle` (default): the maintainer's attribute name in `lib.maintainers`;
-- `email`, `name`, `github`, `githubId`, `matrix`, `name`:
+- `email`, `name`, `github`, `githubId`, `matrix`:
 attributes of the maintainer's object, matched exactly; see [`maintainer-list.nix`] for the fields' definition.
 [`maintainer-list.nix`]: ../maintainer-list.nix


@@ -4,12 +4,12 @@ Currently `nixpkgs` builds most of its packages using bootstrap seed binaries (w
 - `bootstrap-tools`: an archive with the compiler toolchain and other helper tools enough to build the rest of the `nixpkgs`.
 - initial binaries needed to unpack `bootstrap-tools.*`.
-On `linux` it's just `busybox`, on `darwin` and `freebsd` it is unpack.nar.xz which contains the binaries and script needed to unpack the tools.
+On `linux` it's just `busybox`, on `darwin` and `freebsd` it is `unpack.nar.xz` which contains the binaries and script needed to unpack the tools.
 These binaries can be executed directly from the store.
 These are called "bootstrap files".
-Bootstrap files should always be fetched from hydra and uploaded to `tarballs.nixos.org` to guarantee that all the binaries were built from the code committed into `nixpkgs` repository.
+Bootstrap files should always be fetched from Hydra and uploaded to `tarballs.nixos.org` to guarantee that all the binaries were built from the code committed into `nixpkgs` repository.
 The uploads to `tarballs.nixos.org` are done by `@NixOS/infra` team members who have S3 write access.
@@ -93,7 +93,7 @@ To do that you will need the following:
 2. Add your new target to `pkgs/stdenv/linux/make-bootstrap-tools-cross.nix`.
 This will add a new hydra job to `nixpkgs:cross-trunk` jobset.
-3. Wait for a hydra to build your bootstrap tarballs.
+3. Wait for a Hydra to build your bootstrap tarballs.
 4. Add your new target to `maintainers/scripts/bootstrap-files/refresh-tarballs.bash` around `CROSS_TARGETS=()`.
@@ -103,15 +103,15 @@ To do that you will need the following:
 There are two types of bootstrap files:
-- natively built `stdenvBootstrapTools.build` hydra jobs in [`nixpkgs:trunk`](https://hydra.nixos.org/jobset/nixpkgs/trunk#tabs-jobs) jobset.
+- natively built `stdenvBootstrapTools.build` Hydra jobs in [`nixpkgs:trunk`](https://hydra.nixos.org/jobset/nixpkgs/trunk#tabs-jobs) jobset.
 Incomplete list of examples is:
 * `aarch64-unknown-linux-musl.nix`
 * `i686-unknown-linux-gnu.nix`
-These are Tier 1 hydra platforms.
+These are Tier 1 Hydra platforms.
-- cross-built by `bootstrapTools.build` hydra jobs in [`nixpkgs:cross-trunk`](https://hydra.nixos.org/jobset/nixpkgs/cross-trunk#tabs-jobs) jobset.
+- cross-built by `bootstrapTools.build` Hydra jobs in [`nixpkgs:cross-trunk`](https://hydra.nixos.org/jobset/nixpkgs/cross-trunk#tabs-jobs) jobset.
 Incomplete list of examples is:
 * `mips64el-unknown-linux-gnuabi64.nix`


@ -20,7 +20,7 @@ See the [CONTRIBUTING.md](../CONTRIBUTING.md) document for more general informat
- [`pkgs-lib`](./pkgs-lib): Definitions for utilities that need packages but are not needed for packages
- [`test`](./test): Tests not directly associated with any specific packages
- [`by-name`](./by-name): Top-level packages organised by name ([docs](./by-name/README.md))
- All other directories loosely categorise top-level packages definitions, see [category hierarchy][categories]
- All other directories loosely categorise top-level package definitions, see [category hierarchy][categories]
## Quick Start to Adding a Package
@ -148,7 +148,7 @@ To add a package to Nixpkgs:
- All other [`meta`](https://nixos.org/manual/nixpkgs/stable/#chap-meta) attributes are optional, but its still a good idea to provide at least the `description`, `homepage` and [`license`](https://nixos.org/manual/nixpkgs/stable/#sec-meta-license).
- The exact syntax and semantics of the Nix expression language, including the built-in functions, are [Nix language reference](https://nixos.org/manual/nix/stable/language/).
- The exact syntax and semantics of the Nix expression language, including the built-in functions, can be found in the [Nix language reference](https://nixos.org/manual/nix/stable/language/).
5. To test whether the package builds, run the following command from the root of the nixpkgs source tree:
@ -437,7 +437,7 @@ Follow these guidelines:
- It _must_ be a valid identifier in Nix.
- If the `pname` starts with a digit, the attribute name _should_ be prefixed with an underscore.
Otherwise the attribute name _should not_ be prefixed with an underline.
Otherwise the attribute name _should not_ be prefixed with an underscore.
Example: The corresponding attribute name for `0ad` should be `_0ad`.
@ -460,7 +460,7 @@ Follow these guidelines:
## Versioning
[versioning]: #versioning
These are the guidelines the `version` attribute of a package:
These are the guidelines for the `version` attribute of a package:
- It _must_ start with a digit.
This is required for backwards-compatibility with [how `nix-env` parses derivation names](https://nix.dev/manual/nix/latest/command-ref/nix-env#selectors).
@ -487,7 +487,7 @@ See also [`pkgs/by-name/README.md`'s section on this topic](https://github.com/N
## Meta attributes
The `meta` attribute set should always be placed last in the derivativion and any other "meta"-like attribute sets like `passthru` should be written before it.
The `meta` attribute set should always be placed last in the derivation and any other "meta"-like attribute sets like `passthru` should be written before it.
* `meta.description` must:
* Be short, just one sentence.
@ -655,7 +655,7 @@ The latter avoids link rot when the upstream abandons, squashes or rebases their
{ patches = [ ./0001-add-missing-include.patch ]; }
```
If you do need to do create this sort of patch file, one way to do so is with git:
If you do need to create this sort of patch file, one way to do so is with git:
1. Move to the root directory of the source code you're patching.
@@ -730,7 +730,7 @@ We use jbidwatcher as an example for a discontinued project here.
1. Create a pull request against Nixpkgs.
Mention the package maintainer.
This is how the pull request looks like in this case: [https://github.com/NixOS/nixpkgs/pull/116470](https://github.com/NixOS/nixpkgs/pull/116470)
This is what the pull request looks like in this case: [https://github.com/NixOS/nixpkgs/pull/116470](https://github.com/NixOS/nixpkgs/pull/116470)
## Package tests
@@ -743,7 +743,7 @@ To run the main types of tests locally:
Tests are important to ensure quality and make reviews and automatic updates easy.
The following types of tests exists:
The following types of tests exist:
* [NixOS **module tests**](https://nixos.org/manual/nixos/stable/#sec-nixos-tests), which spawn one or more NixOS VMs.
They exercise both NixOS modules and the packaged programs used within them.
@@ -1137,7 +1137,7 @@ Sample template for a package update review is provided below.
### New packages
New packages are a common type of pull requests.
These pull requests consist in adding a new nix-expression for a package.
These pull requests consist of adding a new nix-expression for a package.
Review process:
@@ -1146,7 +1146,7 @@ Review process:
- Ensure that the package versioning [fits the guidelines](#versioning).
- Ensure that the commit text [fits the guidelines](../CONTRIBUTING.md#commit-conventions).
- Ensure that the source is fetched from an official location, one of our [trusted mirrors](./build-support/fetchurl/mirrors.nix), or a mirror trusted by the authors.
- Ensure that the meta fields [fits the guidelines](#meta-attributes) and contain the correct information:
- Ensure that the meta fields [fit the guidelines](#meta-attributes) and contain the correct information:
- License must match the upstream license.
- Platforms should be set (or the package will not get binary substitutes).
- Maintainers must be set.
@@ -1250,7 +1250,7 @@ Note that there can be an extra comment containing links to previously reported
#### Triaging and Fixing
**Note**: An issue can be a "false positive" (i.e. automatically opened, but without the package it refers to being actually vulnerable).
If you find such a "false positive", comment on the issue an explanation of why it falls into this category, linking as much information as the necessary to help maintainers double check.
If you find such a "false positive", comment on the issue with an explanation of why it falls into this category, linking as much information as necessary to help maintainers double-check.
If you are investigating a "true positive":

@@ -8,9 +8,9 @@ The jdk is in `pkgs/development/compilers/jetbrains-jdk`.
## How to use plugins:
- Pass your IDE package and a list of plugin packages to `jetbrains.plugins.addPlugins`.
E.g. `pkgs.jetbrains.plugins.addPlugins pkgs.jetbrains.idea [ ideavim ]`
- The list has to contain contain drvs giving the directory contents of the plugin or a single `.jar` (executable).
- The list has to contain drvs giving the directory contents of the plugin or a single `.jar` (executable).
Nixpkgs does not package Jetbrains plugins, however you can use third-party sources, such as
Nixpkgs does not package JetBrains plugins, however you can use third-party sources, such as
[nix-jetbrains-plugins](https://github.com/nix-community/nix-jetbrains-plugins).
Note that some plugins may not work without modification, if they are packaged in a way that is incompatible with NixOS.
You can try installing such plugins from within the IDE instead.
@@ -57,7 +57,7 @@ Any comments or other manual changes between these markers will be removed when
### TODO:
- drop the community IDEs
- Switch `mkJetbrainsProduct` to use `lib.extendMkDerivation`, see also:
- Switch `mkJetBrainsProduct` to use `lib.extendMkDerivation`, see also:
- https://github.com/NixOS/nixpkgs/pull/475183#discussion_r2655305961
- https://github.com/NixOS/nixpkgs/pull/475183#discussion_r2655348886
- move PyCharm overrides to a common place outside of `default.nix`
@@ -69,7 +69,7 @@ Any comments or other manual changes between these markers will be removed when
- from source builds:
- remove timestamps in output `.jar` of `jps-bootstrap`
- automated update scripts
- fetch `.jar` s from stuff built in nixpkgs when available
- fetch `.jar`s from stuff built in nixpkgs when available
- what stuff built in nixpkgs provides `.jar`s we care about?
- kotlin
- make `configurePhase` respect `$NIX_BUILD_CORES`

@@ -9,7 +9,7 @@
* Currently `nixfmt-rfc-style` formatter is being used to format the VSCode extensions.
* Respect `alphabetical order` whenever adding extensions. On disorder, please, kindly open a PR re-establishing the order.
* Respect `alphabetical order` whenever adding extensions. If out of order, please kindly open a PR re-establishing the order.
* Avoid [unnecessary](https://nix.dev/guides/best-practices.html#with-scopes) use of `with`, particularly `nested with`.
@@ -27,7 +27,7 @@
- maintainers are listed in alphabetical order.
- verify `license` in upstream.
* On commit messages:
* Commit messages:
- Naming convention for:
- Adding a new extension:

@@ -9,8 +9,8 @@ The basic steps to add a new core are:
1. Add a new core using `mkLibretroCore` function (use one of the existing
cores as an example)
2. Add your new core to [`default.nix`](./default.nix) file
3. Try to build your core with `nix-build -A libretro.<core>`
2. Add your new core to [`default.nix`](./default.nix) file.
3. Try to build your core with `nix-build -A libretro.<core>`.
## Using RetroArch with cores

@@ -11,8 +11,8 @@
- `ungoogled-chromium`: A patch set for Chromium, that has its own entry in Chromium's `upstream-info.nix`.
- `chromedriver`: Updated via Chromium's `upstream-info.nix` and not built
from source. Must match Chromium's major version.
- `electron-source`: Various versions of electron that are built from source using Chromium's
`-unwrapped` derivation, due to electron being based on Chromium.
- `electron-source`: Various versions of Electron that are built from source using Chromium's
`-unwrapped` derivation, due to Electron being based on Chromium.
# Upstream links

@@ -1,6 +1,6 @@
# K3s
K3s is a simplified [Kubernetes](https://wiki.nixos.org/wiki/Kubernetes) version that bundles Kubernetes cluster components into a few small binaries optimized for Edge and IoT devices.
K3s is a simplified [Kubernetes](https://wiki.nixos.org/wiki/Kubernetes) distribution that bundles Kubernetes cluster components into a few small binaries optimized for Edge and IoT devices.
## Usage
@@ -8,8 +8,8 @@ K3s is a simplified [Kubernetes](https://wiki.nixos.org/wiki/Kubernetes) version
## Configuration Examples
* [Nvidia GPU Passthru](docs/examples/NVIDIA.md)
* [Intel GPU Passthru](docs/examples/INTEL.md)
* [Nvidia GPU Passthrough](docs/examples/NVIDIA.md)
* [Intel GPU Passthrough](docs/examples/INTEL.md)
* [Storage Examples](docs/examples/STORAGE.md)
## Cluster Maintenance and Troubleshooting

@@ -7,27 +7,27 @@ General documentation for the K3s user for cluster tasks and troubleshooting ste
### Changing K3s Token
Changing the K3s token requires resetting cluster. To reset the cluster, you must do the following:
Changing the K3s token requires resetting the cluster. To reset the cluster, you must do the following:
#### Stopping K3s
Disabling K3s NixOS module won't stop K3s related dependencies, such as containerd or networking. For stopping everything, either run "k3s-killall.sh" script (available on $PATH under `/run/current-system/sw/bin/k3s-killall.sh`) or reboot host.
Disabling the K3s NixOS module won't stop K3s-related dependencies, such as containerd or networking. To stop everything, either run the "k3s-killall.sh" script (available on $PATH under `/run/current-system/sw/bin/k3s-killall.sh`) or reboot the host.
### Syncing K3s in multiple hosts
Nix automatically syncs hosts to `configuration.nix`, for syncing configuration.nix's git repository and triggering `nixos-rebuild switch` in multiple hosts, it is commonly used `ansible`, which enables automation of cluster provisioning, upgrade and reset.
Nix automatically syncs hosts to `configuration.nix`. To sync `configuration.nix`'s git repository and trigger `nixos-rebuild switch` on multiple hosts, `ansible` is commonly used, which enables automation of cluster provisioning, upgrade and reset.
### Cluster Reset
As upstream "k3s-uninstall.sh" is yet to be packaged for NixOS, it's necessary to run manual steps for resetting cluster.
As upstream "k3s-uninstall.sh" is yet to be packaged for NixOS, it's necessary to run manual steps for resetting the cluster.
Disable K3s instances in **all** hosts:
Disable K3s instances on **all** hosts:
In NixOS configuration, set:
```
services.k3s.enable = false;
```
Rebuild NixOS. This is going to remove K3s service files. But it won't delete K3s data.
Rebuild the NixOS configuration. This will remove the K3s service files, but it won't delete K3s data.
To delete K3s files:
@@ -43,7 +43,7 @@ Delete k3s data:
```
When using Etcd, Reset Etcd:
Certify **all** K3s instances are stopped, because a single instance can re-seed etcd database with previous cryptographic key.
Ensure **all** K3s instances are stopped, because a single instance can re-seed the etcd database with the previous cryptographic key.
Disable etcd database in NixOS configuration:
```
@@ -55,22 +55,22 @@ Delete etcd files:
```
rm -rf /var/lib/etcd/
```
Reboot hosts.
Reboot the hosts.
In NixOS configuration:
```
Re-enable Etcd first. Rebuild NixOS. Certify service health. (systemctl status etcd)
Re-enable K3s second. Rebuild NixOS. Certify service health. (systemctl status k3s)
```
Etcd & K3s cluster will be provisioned new.
```
Re-enable Etcd first. Rebuild NixOS. Verify service health. (systemctl status etcd)
Re-enable K3s second. Rebuild NixOS. Verify service health. (systemctl status k3s)
```
The Etcd & K3s cluster will be provisioned anew.
Tip: Use Ansible to automate the reset routine, like this.
## Troubleshooting
### Raspberry Pi not working
If the k3s.service/k3s server does not start and gives you the error FATA[0000] failed to find memory cgroup (v2) Here's the github issue: https://github.com/k3s-io/k3s/issues/2067 .
If the k3s.service/k3s server does not start and gives you the error `FATA[0000] failed to find memory cgroup (v2)`, see the GitHub issue: https://github.com/k3s-io/k3s/issues/2067.
To fix the problem, you can add these things to your configuration.nix.
```

@@ -1,8 +1,8 @@
# Onboarding Maintainer
Anyone willing can become a maintainer, no pre-requisite knowledge is required. Willingness to learn is enough.
Anyone willing can become a maintainer; no prerequisite knowledge is required. Willingness to learn is enough.
A K3s maintainer, maintains K3s's:
A K3s maintainer maintains K3s's:
- [documentation](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/cluster/k3s/README.md)
- [issues](https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+k3s)
@@ -24,7 +24,7 @@ Only consensus is required to move forward any proposal. Consensus meaning the a
If you cause a regression (we've all been there), you are responsible for fixing it, but in case you can't fix it (it happens), feel free to ask for help. That's fine, just let us know.
To merge code, you need to be a committer, or use the merge-bot, but currently the merge-bot only works for packages located at `pkgs/by-name/`, which means, K3s still need to be migrated there before you can use merge-bot for merging. As a non-committer, once you have approved a PR you need to forward the request to a committer. For deciding which committer, give preference initially to K3s committers, but any committer can commit. A committer usually has a green approval in PRs.
To merge code, you need to be a committer, or use the merge-bot, but currently the merge-bot only works for packages located at `pkgs/by-name/`, which means K3s still needs to be migrated there before you can use merge-bot for merging. As a non-committer, once you have approved a PR you need to forward the request to a committer. For deciding which committer, give preference initially to K3s committers, but any committer can commit. A committer usually has a green approval in PRs.
K3s's committers currently are: marcusramberg, Mic92.
@@ -32,11 +32,11 @@ K3s's committers currently are: marcusramberg, Mic92.
@mic92 stepped up when @superherointj stepped down a time ago, as Mic92 has a broad responsibility in nixpkgs (he is responsible for far too many things already, nixpkgs-reviews, sops-nix, release manager, bot-whatever), we avoid giving him chore work for `nixos-unstable`, only pick him as committer last. As Mic92 runs K3s in a `nixos-stable` setting, he might help in testing stable backports.
On how to handle requests, it's the usual basics, such as, when reviewing PRs, issues, be welcoming, helpful, provide hints whenever possible, try to move things forward, assume good will, ignore [as don't react to] any negativity [since it spirals badly], delay and sort any (severe) disagreement in private. Even on disagrements, be thankful to people for their dedicated time, no matter what happens. In essence, on any unfortunate event, **always put people over code**.
On how to handle requests, it's the usual basics, such as, when reviewing PRs, issues, be welcoming, helpful, provide hints whenever possible, try to move things forward, assume good will, ignore [as don't react to] any negativity [since it spirals badly], delay and sort any (severe) disagreement in private. Even on disagreements, be thankful to people for their dedicated time, no matter what happens. In essence, on any unfortunate event, **always put people over code**.
Dumbshit happens, we make mistakes, the CI, reviews, fellow maintainers are there to nudge us on a better direction, no need to over think interactions, if a problem happens, we'll handle it.
Dumbshit happens, we make mistakes, the CI, reviews, fellow maintainers are there to nudge us on a better direction, no need to overthink interactions, if a problem happens, we'll handle it.
We should optimize for maintainers satisfaction, because it is maintainers that make the service great. The best kind of win we have is when someone new steps up for being a maintainer. This multiplies our capabilities of doing meaningful work and increases our knowledge pool.
We should optimize for maintainer satisfaction, because it is maintainers that make the service great. The best kind of win we have is when someone new steps up for being a maintainer. This multiplies our capabilities of doing meaningful work and increases our knowledge pool.
Know that your participation matters most for us. And we thank you for stepping up. It's good to have you here!

@@ -12,7 +12,7 @@ This process split into two sections and adheres to the versioning policy outlin
* Prior to the breaking change window of the next release being closed:
* `nixos-unstable`: Ensure k3s points to latest versioned release
* `nixos-unstable`: Ensure release notes are up to date
* `nixos-unstable`: Remove k3s releases which will be end of life upstream prior to end-of-life for the next NixOS stable release are removed with proper deprecation notice (process listed below)
* `nixos-unstable`: Remove k3s releases which will be end of life upstream prior to end-of-life for the next NixOS stable release, with proper deprecation notice (process listed below)
### Post-Release
@@ -53,7 +53,7 @@ Package removal policy and timelines follow our reasoning in the [versioning doc
Quick checklist for reviewers of the k3s package:
* Is the version of the Go compiler pinned according to the go.mod file for the release?
* Update script will not pin nor change the go version.
* The update script will not pin nor change the Go version.
* Do the K3s passthru.tests work for all supported architectures? (x86_64-linux, aarch64-linux)
* For GitHub CI, [OfBorg](https://github.com/NixOS/ofborg) can be used to test all platforms.
* For local testing, the following can be run in the nixpkgs root on the upgrade branch: `nix build .#k3s_1_29.passthru.tests.{etcd,single-node,multi-node}` (replace "29" with the version being tested)

@@ -25,7 +25,7 @@ Multi-node setup
## Multi-Node
it is simple to create a cluster of multiple nodes in a highly available setup (all nodes are in the control-plane and are a part of the etcd cluster).
It is simple to create a cluster of multiple nodes in a highly available setup (all nodes are in the control-plane and are a part of the etcd cluster).
The first node is configured like this:
@@ -62,7 +62,7 @@ Tip: If you run into connectivity issues between nodes for specific applications
### `prefer-bundled-bin`
K3s has a config setting `prefer-bundled-bin` (and CLI flag `--prefer-bundled-bin`) that makes k3s use binaries from the `/var/lib/rancher/k3s/data/current/bin/aux/` directory, as unpacked by the k3s binary, before the system `$PATH`.
This works with the official distribution of k3s but not with the package from nixpkgs, as it does not bundle the upstream binaries from [`k3s-root`](https://github.com/k3s-io/k3s-root) into the k3s binary.
This works with the official distribution of k3s but not with the package from Nixpkgs, as it does not bundle the upstream binaries from [`k3s-root`](https://github.com/k3s-io/k3s-root) into the k3s binary.
Thus the `prefer-bundled-bin` setting **cannot** be used to work around issues (like [this `mount` regression](https://github.com/util-linux/util-linux/issues/3474)) with binaries used/called by the kubelet.
### Building from a different source

@@ -1,12 +1,12 @@
# Versioning
K3s, Kubernetes, and other clustered software has the property of not being able to update atomically. Most software in nixpkgs, like for example bash, can be updated as part of a "nixos-rebuild switch" without having to worry about the old and the new bash interacting in some way.
K3s, Kubernetes, and other clustered software have the property of not being able to update atomically. Most software in Nixpkgs, like for example bash, can be updated as part of a "nixos-rebuild switch" without having to worry about the old and the new bash interacting in some way.
K3s/Kubernetes, on the other hand, is typically run across several NixOS machines, and each NixOS machine is updated independently. As such, different versions of the package and NixOS module must maintain compatibility with each other through temporary version skew during updates.
The upstream Kubernetes project [documents this in their version-skew policy](https://kubernetes.io/releases/version-skew-policy/#supported-component-upgrade-order).
Within nixpkgs, we strive to maintain a valid "upgrade path" that does not run
Within Nixpkgs, we strive to maintain a valid "upgrade path" that does not run
afoul of the upstream version skew policy.
## Patch Release Support Lifecycle
@@ -15,11 +15,11 @@ K3s is built on top of K8s and typically provides a similar release cadence and
In short, a new Kubernetes version is released roughly every 4 months and each release is supported for a little over 1 year.
## Versioning in nixpkgs
## Versioning in Nixpkgs
There are two package types that are maintained within nixpkgs when we are looking at the `nixos-unstable` branch. A standard `k3s` package and versioned releases such as `k3s_1_28`, `k3s_1_29`, and `k3s_1_30`.
There are two package types that are maintained within Nixpkgs when we are looking at the `nixos-unstable` branch: a standard `k3s` package and versioned releases such as `k3s_1_28`, `k3s_1_29`, and `k3s_1_30`.
The standard `k3s` package will be updated as new versions of k3s are released upstream. Versioned releases, on the other hand, will follow the path release support lifecycle as detailed in the previous section and be removed from `nixos-unstable` when they are either end-of-life upstream or older than the current `k3s` package in `nixos-stable`.
The standard `k3s` package will be updated as new versions of k3s are released upstream. Versioned releases, on the other hand, will follow the patch release support lifecycle as detailed in the previous section and be removed from `nixos-unstable` when they are either end-of-life upstream or older than the current `k3s` package in `nixos-stable`.
## Versioning in NixOS Releases

@@ -5,7 +5,7 @@ This article makes the following assumptions:
2. The Linux kernel running is modern enough to support your GPU out of the box
3. The desired driver is `i915` -- modify as needed for other drivers
> Note: at the time of writing, the author was using an Intel Arc A770 in k3s. The majority of this guide likely should work on other Kubernetes distributions, and will likely work identically for integrated graphics capabilities.
> Note: at the time of writing, the author was using an Intel Arc A770 in k3s. The majority of this guide should work on other Kubernetes distributions, and will likely work identically for integrated graphics capabilities.
### Enable the Intel driver in NixOS
@@ -15,13 +15,13 @@ Add the following NixOS configuration to enable the Intel driver (necessary on h
services.xserver.videoDrivers = [ "i915" ];
```
After rebuilding the configuration, reboot the host for the GPU driver to be assigned to the GPU. Use the following command to ensure the GPU is using the i915 kernel:
After rebuilding the configuration, reboot the host for the driver to be assigned to the GPU. Use the following command to ensure the GPU is using the i915 kernel:
```
sudo lspci -k
```
i.e. the output looks like this on a host with the Intel Arc A770:
For example, the output looks like this on a host with the Intel Arc A770:
```
sudo lspci -k | grep -A 3 'Arc'
@@ -117,7 +117,7 @@ Verify the number has been applied like so:
kubectl get nodes -o yaml | grep gpu.intel.com/i915 | sort -u
```
i.e. in this configuration, up to 10 pods can use the GPU:
For example, in this configuration, up to 10 pods can use the GPU:
```
kubectl get nodes -o yaml | grep gpu.intel.com/i915 | sort -u

@@ -58,7 +58,7 @@ Additionally, `lspci -k` can be used to ensure the driver has been assigned to t
## Configure k3s
You now need to create a new file in `/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl` with the following
You now need to create a new file in `/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl` with the following:
```
{{ template "base" . }}

@@ -14,7 +14,7 @@ services.openiscsi = {
};
```
Longhorn container has trouble with NixOS path. Solution is to override PATH environment variable, such as:
The Longhorn container has trouble with the NixOS `PATH`. The solution is to override the `PATH` environment variable, for example:
```
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/run/wrappers/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
@@ -42,7 +42,7 @@ metadata:
policies.kyverno.io/category: Other
policies.kyverno.io/description: >-
Longhorn invokes executables on the host system, and needs
to be aware of the host systems PATH. This modifies all
to be aware of the host system's PATH. This modifies all
deployments such that the PATH is explicitly set to support
NixOS based systems.
spec:

@@ -1,7 +1,7 @@
# RKE2 Version
RKE2, Kubernetes, and other clustered software has the property of not being able to update
atomically. Most software in nixpkgs, like for example bash, can be updated as part of a
RKE2, Kubernetes, and other clustered software have the property of not being able to update
atomically. Most software in Nixpkgs, like for example bash, can be updated as part of a
`nixos-rebuild switch` without having to worry about the old and the new bash interacting in some
way. RKE2/Kubernetes, on the other hand, is typically run across several machines, and each machine
is updated independently. As such, different versions of the package and NixOS module must maintain
@@ -9,7 +9,7 @@ compatibility with each other through temporary version skew during updates. The
project documents this in their
[version-skew policy](https://kubernetes.io/releases/version-skew-policy/#supported-component-upgrade-order).
Within nixpkgs, we strive to maintain a valid "upgrade path" that does not run afoul of the upstream
Within Nixpkgs, we strive to maintain a valid "upgrade path" that does not run afoul of the upstream
version skew policy.
> [!NOTE]
@@ -18,13 +18,13 @@ version skew policy.
## Release Maintenance
This section describes how new RKE2 releases are published in nixpkgs.
This section describes how new RKE2 releases are published in Nixpkgs.
Before contributing new RKE2 packages or updating existing packages, make sure that
- New packages build (e.g. `nix-build -A rke2_1_34`)
- All tests pass (e.g. `nix-build -A rke2_1_34.tests`)
- You respect the nixpkgs [contributing guidelines](/CONTRIBUTING.md)
- You respect the Nixpkgs [contributing guidelines](/CONTRIBUTING.md)
### Release Channels
@@ -95,7 +95,7 @@ In order to remove a versioned RKE2 package, create a PR achieving the following
[pkgs/top-level/all-packages.nix](/pkgs/top-level/all-packages.nix)
4. Add a deprecation notice in [pkgs/top-level/aliases.nix](/pkgs/top-level/aliases.nix)
- Such as
`rke2_1_34 = throw "'rke2_1_34' has been removed from nixpkgs as it has reached end of life"; # Added 2026-10-27`
`rke2_1_34 = throw "'rke2_1_34' has been removed from Nixpkgs as it has reached end of life"; # Added 2026-10-27`
#### Handling EOL on stable

@@ -28,11 +28,11 @@ Updating is done in 3 steps:
## Adding new libraries
To add a new package to this scope, simply add a new subdirectory containing a `default.nix` file with the appropriate package name. The scope automatically picks up any directories and adds an according toplevel package.
To add a new package to this scope, simply add a new subdirectory containing a `default.nix` file with the appropriate package name. The scope automatically picks up any directories and adds a corresponding toplevel package.
If the package you are adding is contained within the `linphone-sdk` monorepo, it makes sense to use the `mkLinphoneDerivation` function to streamline the build process.
If the package you are adding is a third-party libary with custom patches from BC, it should be prefixed with `bc-` for easy recognizability, so e.g. if BC were to patch `ffmpeg`, you would call the package `bc-ffmpeg`.
If the package you are adding is a third-party library with custom patches from BC, it should be prefixed with `bc-` for easy recognizability, so e.g. if BC were to patch `ffmpeg`, you would call the package `bc-ffmpeg`.
## Notes for the future

@@ -3,11 +3,11 @@
Go promises that "programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification" [1].
Newer toolchain versions should build projects developed against older toolchains without problems.
**Definition(a "toolchain-breaking" package):**
**Definition (a "toolchain-breaking" package):**
There are however Go packages depending on internal APIs of the toolchain/runtime/stdlib that are not covered by the Go compatibility promise.
These packages may break on toolchain minor version upgrades.
**Definition(a "toolchain-latest" package):**
**Definition (a "toolchain-latest" package):**
Packages providing development support for the Go language (like `gopls`, `golangci-lint`,...) depend on the toolchain in another way: they must be compiled at least with the version they should be used for.
If `gopls` is compiled for Go 1.23, it won't work for projects that require Go 1.24.
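The "toolchain-latest" constraint above can be sketched as a simple version comparison (illustrative only; the function name and the plain `major.minor` version model are assumptions for this example, not nixpkgs code):

```python
def supports_project(tool_go: str, project_go: str) -> bool:
    """Return True if a tool built with Go `tool_go` can serve a
    project requiring Go `project_go`.

    Toolchain-latest rule sketch: the tool must be compiled with at
    least the Go version the project requires.
    """
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(tool_go) >= as_tuple(project_go)
```

For example, `supports_project("1.23", "1.24")` is `False`, matching the `gopls` example above.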
@@ -21,13 +21,13 @@ Based on this, we align on the following policy for toolchain/builder upgrades f
2. The `go_latest` toolchain and the `buildGoLatestModule` are also bumped directly after release, but the update goes to the `master` branch.
Packages in `toolchain-latest` SHOULD use `go_latest`/`buildGoLatestModule`.
Packages in nixpkgs MUST only use this toolchain/builder if they have a good reason to do so
Packages in nixpkgs MUST only use this toolchain/builder if they have a good reason to do so.
A comment MUST be added explaining why this is the case for a certain package.
It is important to keep the number of packages using this builder within nixpkgs low, so the bump won't cause a mass rebuild.
`go_latest` MUST not point to release candidates of Go.
Consumer outside of nixpkgs on the other hand MAY rely on this toolchain/builder if they prefer being upgraded earlier to the newest toolchain minor version.
Consumers outside of nixpkgs on the other hand MAY rely on this toolchain/builder if they prefer being upgraded earlier to the newest toolchain minor version.
3. Packages in `toolchain-breaking` SHOULD pin a toolchain version by using a builder with a fixed Go version (`buildGo1xxModule`).
The use of `buildGo1xxModule` MUST be accompanied with a comment explaining why this has a dependency on a specific Go version.
@@ -40,7 +40,7 @@ Based on this, we align on the following policy for toolchain/builder upgrades f
When an end-of-life toolchain is removed, builders that pin the EOL version (according to 3.) will automatically be bumped to the then oldest pinned builder (e.g. Go 1.22 is EOL, `buildGo122Module` is bumped to `buildGo123Module`).
If the package won't build with that builder anymore, the package is marked broken.
It is the package maintainers responsibility to fix the package and get it working with a supported Go toolchain.
It is the package maintainer's responsibility to fix the package and get it working with a supported Go toolchain.
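The fallback behavior described above can be sketched like this (illustrative only; the function name and list-based model are assumptions, not how nixpkgs implements the bump):

```python
def resolve_builder(pinned: str, supported: list[str]) -> str:
    """If the pinned Go version is still supported, keep it;
    otherwise fall back to the oldest still-supported pinned
    version (e.g. 1.22 goes EOL, so its builder resolves to 1.23).
    """
    key = lambda v: tuple(int(p) for p in v.split("."))
    if pinned in supported:
        return pinned
    return min(supported, key=key)
```

For example, with supported versions `["1.23", "1.24"]`, a builder pinned to the EOL `"1.22"` resolves to `"1.23"`.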
For the stable release, we recognize that (1) removing a Go version, or updating the `go_latest` or `go` packages to a new Go minor release, would be a breaking change, and (2) some packages will need backports (e.g. for security reasons) that require the latest Go version.
Therefore, on the stable release, new Go versions will be backported to the `release-2x.xx` branch, but the old versions will remain, and `go`, `buildGoModule`, `go_latest`, and `buildGoLatestModule` will remain unchanged.

@@ -74,7 +74,7 @@ initrd to a minimum.
in less than a second, and the code is substantially easier to work
with.
- This will not require end users to install a rust toolchain to use
- This will not require end users to install a Rust toolchain to use
NixOS, as long as this tool is cached by Hydra. And if you're
bootstrapping NixOS from source, rustc is already required anyway.

@@ -64,7 +64,7 @@ The above expression is called using these arguments by default:
But the package might need `pkgs.libbar_2` instead.
While the `libbar` argument could explicitly be overridden in `all-packages.nix` with `libbar_2`, this would hide important information about this package from its interface.
The fact that the package requires a certain version of `libbar` to work should not be hidden in a separate place.
It is preferable to use `libbar_2` as a argument name instead.
It is preferable to use `libbar_2` as an argument name instead.
This approach also has the benefit that, if the expectation of the package changes to require a different version of `libbar`, a downstream user with an override of this argument will receive an error.
This is comparable to a merge conflict in git: It's much better to be forced to explicitly address the conflict instead of silently keeping the override - which might lead to a different problem that is likely much harder to debug.

@@ -4,7 +4,7 @@
- Update `version` and `src.hash` in package.nix
- Check out the changes made to the azure-cli [setup.py](https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/setup.py) since the last release
- Try build the CLI, will likely fail with `ModuleNotFoundError`, for example
- Try to build the CLI, will likely fail with `ModuleNotFoundError`, for example
```
ModuleNotFoundError: No module named 'azure.mgmt.storage.v2023_05_01'
```
@@ -21,7 +21,7 @@
There are two sets of extensions:
- `extensions-generated.nix` are extensions with no external requirements, which can be regenerated running:
- `extensions-generated.nix` are extensions with no external requirements, which can be regenerated by running:
> nix run .#azure-cli.passthru.generate-extensions
- `extensions-manual.nix` are extensions with requirements, which need to be manually packaged and maintained.

View file

@@ -1,6 +1,6 @@
# `devmode`
`devmode` is a daemon, that:
`devmode` is a daemon that:
1. watches the manual's source for changes and when they occur — rebuilds
2. HTTP serves the manual, injecting a script that triggers reload on changes
3. opens the manual in the default browser

View file

@@ -19,7 +19,7 @@ After every NixOS release, the unsupported etcd versions should be removed by et
## User guidelines on etcd upgrades
Before upgrading a NixOS release, certify to upgrade etcd to the latest version in the current used release.
Before upgrading a NixOS release, make sure to upgrade etcd to the latest version in the currently used release.
Manual steps might be required for the upgrade.

View file

@@ -2,12 +2,12 @@ This directory contains a vendored copy of `games.json`, along with tooling to g
## Purpose
The games data is fetched at runtime by NexusMods.App, however it is also included at build time for two reasons:
The games data is fetched at runtime by NexusMods.App; however, it is also included at build time for two reasons:
1. It allows tests to run against real data.
2. It is used as cached data, speeding up the app's initial run.
It is not vital for the file to contain all games, however ideally it should contain all games _supported_ by this version of NexusMods.App.
It is not vital for the file to contain all games; however, ideally it should contain all games _supported_ by this version of NexusMods.App.
That way the initial run's cached data is more useful.
If this file grows too large, because we are including too many games, we can patch the `csproj` build spec so that `games.json` is not used at build time.

View file

@@ -29,7 +29,7 @@ robust.
writing correct software easier and should improve the quality of the NixOS
boot code.
- Most things can be started much later than one might assume. Because systemd
services are parallelized, this should improve start up time.
services are parallelized, this should improve startup time.
## Invariants
@@ -50,7 +50,7 @@ closure. Currently nixos-init comes in at ~500 KiB.
- `initrd-init`: Initializes the system on boot, setting up the tree for
systemd to start.
- `find-etc`: Finds the `/etc` paths in `/sysroot` so that the initrd doesn't
directly depend on the toplevel reducing the need to rebuild the initrd on
directly depend on the toplevel, reducing the need to rebuild the initrd on
every generation.
- `chroot-realpath`: Figures out the canonical path inside a chroot.

View file

@@ -13,7 +13,7 @@ Maintaining our own documentation rendering framework may appear extreme but has
- The amount of code involved is minimal because it's single-purpose
Several alternatives to `nixos-render-docs` were discussed in the past.
A detailed analysis can be found in a [table comparing documentation rendering framework](https://ethercalc.net/dc4vcnnl8zv0).
A detailed analysis can be found in a [table comparing documentation rendering frameworks](https://ethercalc.net/dc4vcnnl8zv0).
## Redirects system
@@ -78,7 +78,7 @@ In case this identifier is renamed, the mapping would change into:
## Rendering multiple pages
The `include` directive accepts an argument `into-file` to specify the file into which the imported markdown should be rendered to. We can use this argument to set up multipage rendering of the manuals.
The `include` directive accepts an argument `into-file` to specify the file into which the imported markdown should be rendered. We can use this argument to set up multipage rendering of the manuals.
For example
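A multipage setup might look like this (directive syntax recalled from the manual sources; treat the class and file names as illustrative):

```{=include=} chapters html:into-file=//installation.html
installation.md
```

Here the contents of `installation.md` would be rendered into a separate `installation.html` page rather than inlined into the parent document.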

View file

@@ -29,7 +29,7 @@ Then make the needed changes and generate a patch with `git diff`:
[user@localhost ~]$ git diff -u > /path/to/nixpkgs/pkgs/by-name/sa/sage/patches/name-of-patch.patch
```
Now just add the patch to `sage-src.nix` and test your changes. If they fix the problem, submit a PR upstream (refer to sages [Developer's Guide](http://doc.sagemath.org/html/en/developer/index.html) for further details).
Now just add the patch to `sage-src.nix` and test your changes. If they fix the problem, submit a PR upstream (refer to Sage's [Developer's Guide](http://doc.sagemath.org/html/en/developer/index.html) for further details).
- pin the package version in `default.nix` and add a note that explains why that is necessary.

View file

@@ -1,32 +1,32 @@
# GNOME Shell extensions
All extensions are packaged automatically. They can be found in the `pkgs.gnomeXYExtensions` for XY being a GNOME version. The package names are the extensions UUID, which can be a bit unwieldy to use. `pkgs.gnomeExtensions` is a set of manually curated extensions that match the current `pkgs.gnome-shell` versions. Their name is human-friendly, compared to the other extensions sets. Some of its extensions are manually packaged.
All extensions are packaged automatically. They can be found in the `pkgs.gnomeXYExtensions` sets, where XY is a GNOME version. The package names are the extensions' UUIDs, which can be a bit unwieldy to use. `pkgs.gnomeExtensions` is a set of manually curated extensions that match the current `pkgs.gnome-shell` versions. Their names are human-friendly, compared to the other extension sets. Some of its extensions are manually packaged.
## Automatically packaged extensions
The actual packages are created by `buildGnomeExtension.nix`, provided the correct arguments are fed into it. The important extension data is stored in `extensions.json`, one line/item per extension. That file is generated by running `update-extensions.py`. Furthermore, the automatic generated names are dumped in `collisions.json` for manual inspection. `extensionRenames.nix` contains new names for all extensions that collide.
The actual packages are created by `buildGnomeExtension.nix`, provided the correct arguments are fed into it. The important extension data is stored in `extensions.json`, one line/item per extension. That file is generated by running `update-extensions.py`. Furthermore, the automatically generated names are dumped in `collisions.json` for manual inspection. `extensionRenames.nix` contains new names for all extensions that collide.
### Extensions updates
#### For everyday updates,
#### For everyday updates:
1. Run `update-extensions.py`.
2. Update `extensionRenames.nix` according to the comment at the top.
#### To package the extensions for new GNOME version,
#### To package the extensions for a new GNOME version:
1. Add a new `gnomeXYExtensions` set in `default.nix`.
2. Update `all-packages.nix` accordingly. (grep for `gnomeExtensions`)
3. Update `supported_versions` in `update-extensions.py`.
4. Follow the [For everyday updates](#for-everyday-updates) section.
#### For GNOME updates,
#### For GNOME updates:
1. Follow the [To package the extensions for new GNOME version](#to-package-the-extensions-for-new-gnome-version) section if required.
1. Follow the [To package the extensions for new GNOME version](#to-package-the-extensions-for-new-gnome-version) section, if required.
2. Update `versions_to_merge` variable in `./update-extensions.py`.
3. Run `update-extensions.py --skip-fetch`, and update `extensionRenames.nix` according to the comment at the top.
4. Update `gnomeExtensions` in `default.nix` to the new versions.
## Manually packaged extensions
Manually packaged extensions overwrite some of the automatically packaged ones in `pkgs.gnomeExtensions`. They are listed in `manuallyPackaged.nix`, every extension has its own sub-folder.
Manually packaged extensions overwrite some of the automatically packaged ones in `pkgs.gnomeExtensions`. They are listed in `manuallyPackaged.nix`; every extension has its own sub-folder.

View file

@@ -6,7 +6,7 @@ Modify revision in ./update.sh and run it
The elm binary embeds a piece of pre-compiled elm code, used by 'elm
reactor'. This means that the build process for 'elm' effectively
executes 'elm make'. that in turn expects to retrieve the elm
executes 'elm make'. That in turn expects to retrieve the elm
dependencies of that code (elm/core, etc.) from
package.elm-lang.org, as well as a cached bit of metadata
(versions.dat).

View file

@@ -7,6 +7,6 @@ Mixtures of useful Elm lang tooling containing both Haskell and Node.js based ut
Haskell parts of the ecosystem are using [cabal2nix](https://github.com/NixOS/cabal2nix).
Please refer to [nix documentation](https://nixos.org/nixpkgs/manual/#how-to-create-nix-builds-for-your-own-private-haskell-packages)
and [cabal2nix readme](https://github.com/NixOS/cabal2nix#readme) for more information. Elm-format [update scripts](https://github.com/avh4/elm-format/tree/master/package/nix)
is part of its repository.
are part of its repository.
Node dependencies are defined with [`buildNpmPackage`](https://nixos.org/manual/nixpkgs/stable/#javascript-buildNpmPackage).

View file

@@ -1,7 +1,7 @@
## How to upgrade llvm_git
- Run `update-git.py`.
This will set the github revision and sha256 for `llvmPackages_git.llvm` to whatever the latest chromium build is using.
This will set the GitHub revision and sha256 for `llvmPackages_git.llvm` to whatever the latest chromium build is using.
For a more recent commit, run `nix-prefetch-github` and change the rev and sha256 accordingly.
- That was the easy part.
@@ -41,12 +41,12 @@
The lines above show us that the `purity.patch` failed on `lib/Driver/ToolChains/Gnu.cpp` when compiling `clang`.
3. The task now is to cross reference the hunks in the purity patch with
`lib/Driver/ToolCahins/Gnu.cpp.orig` to see why the patch failed.
`lib/Driver/ToolChains/Gnu.cpp.orig` to see why the patch failed.
The `.orig` file will be in the build directory referenced in the line `note: keeping build directory ...`;
this message results from the `--keep-failed` flag.
4. Now you should be able to open whichever patch failed, and the `foo.orig` file that it failed on.
Correct the patch by adapting it to the new code and be mindful of whitespace;
Correct the patch by adapting it to the new code and be mindful of whitespace,
which can be an easily missed reason for failures.
For cases where the hunk is no longer needed you can simply remove it from the patch.

View file

@@ -42,7 +42,7 @@ not straightforward to include. These packages are:
- `libnvidia_nscq`: NVSwitch software
- `libnvsdm`: NVSwitch software
- `cublasmp`:
- `libcublasmp`: `nvshmem` isnt' packaged.
- `libcublasmp`: `nvshmem` isn't packaged.
- `cudnn`:
- `cudnn_samples`: requires FreeImage, which is abandoned and not packaged.
@@ -91,9 +91,9 @@ sandbox when building, which can't find those (a second minor issue is that
still take precedence).
The current solution is to do something similar to `addOpenGLRunpathHook`: the
`addCudaCompatRunpathHook` prepends to the path to `cuda_compat`'s `libcuda.so`
`addCudaCompatRunpathHook` prepends the path to `cuda_compat`'s `libcuda.so`
to the `DT_RUNPATH` of whichever package includes the hook as a dependency, and
we include the hook by default for packages in `cudaPackages` (by adding it as a
inputs in `genericManifestBuilder`). We also make sure it's included after
we include the hook by default for packages in `cudaPackages` (by adding it as an
input in `genericManifestBuilder`). We also make sure it's included after
`addOpenGLRunpathHook`, so that it appears _before_ in the `DT_RUNPATH` and
takes precedence.

View file

@@ -14,9 +14,9 @@ workflow.
The workflow generally proceeds in three main steps:
1. create the initial `haskell-updates` PR, and update Stackage and Hackage snapshots
1. wait for contributors to fix newly broken Haskell packages
1. merge `haskell-updates` into `staging`
1. Create the initial `haskell-updates` PR, and update Stackage and Hackage snapshots
1. Wait for contributors to fix newly broken Haskell packages
1. Merge `haskell-updates` into `staging`
Each of these steps is described in a separate section.
@@ -106,7 +106,7 @@ always keep these building.
We should be proactive in working with maintainers to keep their packages
building.
Steps to fix Haskell packages that are failing to build is out of scope for
Steps to fix Haskell packages that are failing to build are out of scope for
this document, but it usually requires fixing up dependencies that are now
out-of-bounds.
@@ -274,7 +274,7 @@ Here are some additional tips that didn't fit in above.
You might want to do this if a user contributes a fix to `cabal2nix` that
will immediately fix a Haskell package in Nixpkgs. First, merge in
the PR to `cabal2nix`, then run `update-cabal2nix-upstable.sh`. Finally, run
the PR to `cabal2nix`, then run `update-cabal2nix-unstable.sh`. Finally, run
[`regenerate-hackage-packages.sh`](../../../maintainers/scripts/haskell/regenerate-hackage-packages.sh)
to regenerate the Hackage package set with the updated version of `hackage2nix`.

View file

@@ -3,8 +3,8 @@
catch_conflicts.py
==================
The file catch_conflicts.py is in a subdirectory because, if it isn't, the
/nix/store/ directory is added to sys.path causing a delay when building.
The file `catch_conflicts.py` is in a subdirectory because, if it isn't, the
`/nix/store/` directory is added to `sys.path`, causing a delay when building.
Pointers:

View file

@@ -1,7 +1,7 @@
# Testing `julia.withPackages`
This folder contains a test suite for ensuring that the top N most popular Julia packages (as measured by download count) work properly. The key parts are
This folder contains a test suite for ensuring that the top N most popular Julia packages (as measured by download count) work properly. The key parts are:
* `top-julia-packages.nix`: an impure derivation for fetching Julia download data and processing it into a file called `top-julia-packages.yaml`. This YAML file contains an array of objects with fields "name", "uuid", and "count", and is sorted in decreasing order of count.
* `julia-top-n`: a small Haskell program which reads `top-julia-packages.yaml` and builds a `julia.withPackages` environment for each package, with a nice interactive display and configurable parallelism. It also tests whether evaluating `using <package-name>` works in the resulting environment.
@@ -18,7 +18,7 @@ This folder contains a test suite for ensuring that the top N most popular Julia
## Options
You can run `./run_tests.sh --help` to see additional options for the test harness. The main ones are
You can run `./run_tests.sh --help` to see additional options for the test harness. The main ones are:
* `-n`/`--top-n`: how many of the top packages to build (default: 100).
* `-p`/`--parallelism`: how many builds to run at once (default: 10).

View file

@@ -32,7 +32,7 @@ Each "solution" (k=v pair) in this attrset describes one resholve invocation.
> solutions to resolve the scripts separately, but produce a single package.
`resholve.writeScript` and `resholve.writeScriptBin` support a _single_
`solution` attrset. This is basically the same as any single solution in `resholve.mkDerivation`, except that it doesn't need a `scripts` attr (it is automatically added). `resholve.phraseSolution` also only accepts a single solution--but it _does_ still require the `scripts` attr.
`solution` attrset. This is basically the same as any single solution in `resholve.mkDerivation`, except that it doesn't need a `scripts` attr (it is automatically added). `resholve.phraseSolution` also only accepts a single solution, but it _does_ still require the `scripts` attr.
## Basic `resholve.mkDerivation` Example
@@ -129,7 +129,7 @@ trivial, so I'll also link to some real-world examples:
## Basic `resholve.phraseSolution` example
This function has a similar API to `writeScript` and `writeScriptBin`, except it does require a `scripts` attr. It is intended to make resholve a little easier to mix into more types of build. This example is a little
This function has a similar API to `writeScript` and `writeScriptBin`, except it does require a `scripts` attr. It is intended to make resholve a little easier to mix into more types of builds. This example is a little
trivial for now. If you have a real usage that you find helpful, please PR it.
```nix
@@ -210,7 +210,7 @@ handle any potential problems it encounters with directives. There are currently
- dynamic (variable) arguments to commands known to accept/run other commands
> NOTE: resholve has a (growing) number of directives detailed in `man resholve`
> via `nixpkgs.resholve` (though protections against run-time use of python2 in nixpkgs mean you'll have to set `NIXPKGS_ALLOW_INSECURE=1` to pull resholve into nix-shell).
> via `nixpkgs.resholve` (though protections against run-time use of Python 2 in Nixpkgs mean you'll have to set `NIXPKGS_ALLOW_INSECURE=1` to pull resholve into `nix-shell`).
Each of these 3 types is represented by its own attrset, where you can think
of the key as a scope. The value should be:

View file

@@ -18,7 +18,7 @@ If your package uses native addons, you need to examine what kind of native buil
- `node-gyp-builder`
- `node-pre-gyp`
After you have identified the correct system, you need to override your package expression while adding in build system as a build input.
After you have identified the correct system, you need to override your package expression while adding the build system as a build input.
For example, `dat` requires `node-gyp-build`, so we override its expression in [pkgs/development/node-packages/overrides.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/overrides.nix):
```nix
@@ -54,7 +54,7 @@ To add a package from npm to Nixpkgs:
nix-build -A nodePackages.<new-or-updated-package>
```
To build against the latest stable Current Node.js version (e.g. 18.x):
To build against the latest stable Node.js version (e.g. 18.x):
```sh
nix-build -A nodePackages_latest.<new-or-updated-package>

View file

@@ -1,7 +1,7 @@
# Python 2 is Not Supported
Packages, applications, and services based on Python 2 are no longer supported and are being removed. If you require a Python 2 based package, you can include that package in your own local repository.
Packages, applications, and services based on Python 2 are no longer supported and are being removed. If you require a Python 2-based package, you can include that package in your own local repository.
Some packages may continue to be maintained for internal use by nixpkgs, but they should not be used by new public packages.
Some packages may continue to be maintained for internal use by Nixpkgs, but they should not be used by new public packages.
For more details, see [Issue #201859](https://github.com/NixOS/nixpkgs/pull/201859).
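One way to keep such a package in your own local repository is a Nixpkgs overlay; a minimal sketch, with all names illustrative:

```nix
# overlay.nix: carry a Python 2 based package in your own tree
final: prev: {
  my-python2-tool = final.callPackage ./pkgs/my-python2-tool { };
}
```

The expression under `./pkgs/my-python2-tool` would then pin whatever Python 2 infrastructure the package still needs, keeping it out of public Nixpkgs.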

View file

@@ -2,7 +2,7 @@
## Introduction
Gradle build scripts are written in a DSL, computing the list of Gradle
Gradle build scripts are written in a DSL; computing the list of Gradle
dependencies is a Turing-complete task, not just in theory but also in
practice. Fetching all of the dependencies often requires building some
native code, running some commands to check the host platform, or just
@@ -35,7 +35,7 @@ the Gradle derivation to access these files.
(Reference: [Repository
Layout](https://cwiki.apache.org/confluence/display/MAVENOLD/Repository+Layout+-+Final))
Most of Gradle dependencies are fetched from Maven repositories. For
Most Gradle dependencies are fetched from Maven repositories. For
each dependency, Gradle finds the first repo where it can successfully
fetch that dependency, and uses that repo for it. Different repos might
actually return different files for the same artifact because of e.g.
@@ -128,7 +128,7 @@ Second, for figuring out where to download the snapshot, Gradle consults
(Reference: [Maven
Metadata](https://maven.apache.org/repositories/metadata.html),
[Metadata](https://maven.apache.org/ref/3.9.8/maven-repository-metadata/repository-metadata.html)
[Metadata](https://maven.apache.org/ref/3.9.8/maven-repository-metadata/repository-metadata.html))
Maven metadata files are called `maven-metadata.xml`.
@@ -139,7 +139,7 @@ G level metadata is currently unsupported. It's only used for Maven
plugins, which Gradle presumably doesn't use.
A level metadata is used for getting the version list for an artifact.
It's an xml with the following items:
It's an XML file with the following items:
- `<groupId>` - group ID
- `<artifactId>` - artifact ID

View file

@@ -1,6 +1,6 @@
# Tree-sitter Grammars
Use [grammar-sources.nix](grammar-sources.nix) to define tree-sitter grammars sources.
Use [grammar-sources.nix](grammar-sources.nix) to define tree-sitter grammar sources.
Tree-sitter grammars follow a common form for compatibility with the [`tree-sitter` CLI](https://tree-sitter.github.io/tree-sitter/cli/index.html).
This uniformity enables consistent packaging through shared tooling.
@@ -44,7 +44,7 @@ Each entry is passed to [buildGrammar](../build-grammar.nix), which in turn popu
Attempt to build the new grammar: `nix-build -A tree-sitter-grammars.tree-sitter-latex`.
This will fail due to the invalid hash.
Review the downloaded source then update the source definition with the printed source `hash`.
Review the downloaded source, then update the source definition with the printed source `hash`.
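This trust-on-first-use step can be sketched like this (the exact field names in `grammar-sources.nix` are assumptions here, and the revision is a placeholder):

```nix
# illustrative entry: build once with a fake hash, then paste the printed one
tree-sitter-latex = {
  url = "https://github.com/latex-lsp/tree-sitter-latex";
  rev = "<tag-or-commit>";
  hash = lib.fakeHash; # replace with the real hash after the first failed build
};
```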
## Pinning Grammar Sources

View file

@@ -6,13 +6,13 @@ This file was generated with pkgs/misc/documentation-highlighter/update.sh
**This package contains only the CDN build assets of highlight.js.**
This may be what you want if you'd like to install the pre-built distributable highlight.js client-side assets via NPM. If you're wanting to use highlight.js mainly on the server-side you likely want the [highlight.js][1] package instead.
This may be what you want if you'd like to install the pre-built distributable highlight.js client-side assets via NPM. If you want to use highlight.js mainly on the server-side, you likely want the [highlight.js][1] package instead.
To access these files via CDN:<br>
https://cdn.jsdelivr.net/gh/highlightjs/cdn-release@latest/build/
**If you just want a single .js file with the common languages built-in:
<https://cdn.jsdelivr.net/gh/highlightjs/cdn-release@latest/build/highlight.min.js>**
**If you just want a single .js file with the common languages built-in:**
<https://cdn.jsdelivr.net/gh/highlightjs/cdn-release@latest/build/highlight.min.js>
---

View file

@@ -3,12 +3,12 @@
## buildHomeAssistantComponent
Custom components should be packaged using the
`buildHomeAssistantComponent` function, that is provided at top-level.
`buildHomeAssistantComponent` function that is provided at top-level.
It builds upon `buildPythonPackage` but uses a custom install and check
phase.
Python runtime dependencies can be directly consumed as unqualified
function arguments. Pass them into `dependencies`, for them to
function arguments. Pass them into `dependencies` for them to
be available to Home Assistant.
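Putting those two points together, a component package might look roughly like this (attribute names such as `owner` and `domain` follow the Nixpkgs manual; the component itself is made up):

```nix
{ buildHomeAssistantComponent, fetchFromGitHub, aiohttp }:

buildHomeAssistantComponent rec {
  owner = "example-owner";
  domain = "example_component";
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example-owner";
    repo = "example_component";
    rev = "v${version}";
    hash = "sha256-..."; # placeholder
  };

  # unqualified Python packages, made available to Home Assistant at runtime
  dependencies = [ aiohttp ];
}
```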
Out-of-tree components need to use Python packages from

View file

@@ -33,7 +33,7 @@ For example, when upgrading from 1.4 -> 1.5
## Remove release
Kanidm versions are supported for 30 days after the release of new versions. Following the example above, 1.5.x superseding 1.4.x in 30 days, do the following near the end of the 30-day window
Kanidm versions are supported for 30 days after the release of a new version. Following the example above, with 1.5.x superseding 1.4.x, do the following near the end of the 30-day window:
1. Update `pkgs/by-name/ka/kanidm/1_4.nix` by adding `unsupported = true;`
1. Update `pkgs/top-level/release.nix` and add `kanidm_1_4-1.4.6` and `kanidmWithSecretProvisioning_1_4-1.4.6` to `permittedInsecurePackages`

View file

@@ -19,7 +19,7 @@ file with the id of the app:
The app must be available in the official
[Nextcloud app store](https://apps.nextcloud.com).
https://apps.nextcloud.com. The id corresponds to the last part in the app url,
The id corresponds to the last part in the app url,
for example `breezedark` for the app with the url
`https://apps.nextcloud.com/apps/breezedark`.
@@ -46,7 +46,7 @@ Using it together with the Nextcloud module could look like this:
hostName = "localhost";
config.adminpassFile = "${pkgs.writeText "adminpass" "hunter2"}";
extraApps = with pkgs.nextcloud31Packages.apps; {
inherit mail calendar contact;
inherit mail calendar contacts;
};
extraAppsEnable = true;
};

View file

@@ -1,4 +1,4 @@
To update discourse, do the following:
To update Discourse, do the following:
1. Switch to and work from the `master` branch and the directory this
file is in.
@@ -16,7 +16,7 @@ To update discourse, do the following:
step 4 and 5 again.
7. Run `./update.py update-plugins`.
8. Run `nix build -L -f ../../../../ discourseAllPlugins.tests` to
make sure the plugins build and discourse starts with them. Also
make sure the plugins build and Discourse starts with them. Also
test manually, if possible.
9. If the update works, commit it. If not, apply necessary fixes and
commit. No manual fixes that would be overwritten by the

View file

@@ -8,13 +8,13 @@ file with the codename of the package:
- `wordpress-plugins.json` for plugins
The codename is the last part in the url of the plugin or theme page, for
example `cookie-notice` in in the url
example `cookie-notice` in the url
`https://wordpress.org/plugins/cookie-notice/` or `twentytwenty` in
`https://wordpress.org/themes/twentytwenty/`.
In case of language packages, the name consists of language and country codes.
For example `de_DE` for language code `de` (German) and country code `DE` (Germany).
For available translations and language codes see [upstream translation repository](https://translate.wordpress.org).
For available translations and language codes see the [upstream translation repository](https://translate.wordpress.org).
To regenerate the nixpkgs wordpressPackages set, run:

View file

@@ -16,8 +16,8 @@ There are effectively two steps when updating the standard environment:
1. Update the definition of llvmPackages in `all-packages.nix` for Darwin to match the value of
llvmPackages.latest in `all-packages.nix`. Timing-wise, this is done currently using the spring
release of LLVM and once llvmPackages.latest has been updated to match. If the LLVM project
has announced a release schedule of patch updates, wait until those are in nixpkgs. Otherwise,
release of LLVM and once `llvmPackages.latest` has been updated to match. If the LLVM project
has announced a release schedule of patch updates, wait until those are in Nixpkgs. Otherwise,
the LLVM updates will have to go through staging instead of being merged into master; and
2. Fix the resulting breakage. Most things break due to additional warnings being turned into
errors or additional strictness applied by LLVM. Fixes may come in the form of disabling those

View file

@@ -21,7 +21,7 @@ And to build all important NixOS tests, run:
nix-build nixVersions.nix_$version.tests
```
Be sure to also update the `nix-fallback-paths` whenever you do a patch release for `nixVersions.stable`
Be sure to also update the `nix-fallback-paths` whenever you do a patch release for `nixVersions.stable`.
```
# Replace $version with the actual Nix version
@@ -30,7 +30,7 @@ curl https://releases.nixos.org/nix/nix-$version/fallback-paths.nix > nixos/modu
## Major Version Bumps
If you're updating `nixVersions.stable`, follow all the steps mentioned above, but use the **staging** branch for your pull request (or **staging-next** after coordinating with the people in matrix `#staging:nixos.org`)
If you're updating `nixVersions.stable`, follow all the steps mentioned above, but use the **staging** branch for your pull request (or **staging-next** after coordinating with the people in Matrix `#staging:nixos.org`).
This is necessary because, at the end of the staging-next cycle, the NixOS tests are built through the [staging-next-small](https://hydra.nixos.org/jobset/nixos/staging-next-small) jobset.
Especially NixOS installer tests are important to look at here.

View file

@@ -41,7 +41,7 @@ Finally, replace `tlpdb.nix` with the generated file. Note that if the
`00texlive.config` package), TeX Live packages will not evaluate.
The test `pkgs.tests.texlive.tlpdbNix` verifies that the file `tlpdb.nix`
in Nixpkgs matches the one that generated from `texlive.tlpdb.xz`.
in Nixpkgs matches the one generated from `texlive.tlpdb.xz`.
### Build packages locally and generate fix hashes
@@ -91,7 +91,7 @@ license lists reported by the test into `default.nix`.
### Running the testsuite
There are a some other useful tests that haven't been mentioned before. Build them with
There are some other useful tests that haven't been mentioned before. Build them with
```
nix-build ../../../../.. -A tests.texlive --no-out-link
```
@@ -113,11 +113,11 @@ Most `tlType == "bin"` containers consist of links to scripts distributed in
`$TEXMFDIST/scripts` with a number of patches applied within `default.nix`.
At each upgrade, please run the tests `tests.texlive.shebangs` to verify that
all shebangs have been patched and in case add the relevant interpreters, and
all shebangs have been patched, add the relevant interpreters if necessary, and
use `tests.texlive.binaries` to check if basic execution of all binaries works.
Please review manually all binaries in the `broken` and `ignored` lists of
`tests.texlive.binaries` at least once for major TeX Live release.
`tests.texlive.binaries` at least once for each major TeX Live release.
Since the tests cannot catch all runtime dependencies, you should grep the
`$TEXMFDIST/scripts` folder for common cases, for instance (where `$scripts`