kdwarn: Made Up of Wires Programming blog https://kdwarn.net/programming/blog/feed.xml en-us 60 <![CDATA[Simple Timing in plpgsql]]> ). However, to do some simple timing of operations, you want `clock_timestamp()`. Here's an example in [plpgsql](https://www.postgresql.org/docs/current/plpgsql.html): ```sql do $body$ declare op_start timestamptz; op_end timestamptz; begin op_start := clock_timestamp(); for i in 1..1000000 loop perform (select 1 + 1); end loop; op_end := clock_timestamp(); raise info '%', (op_end - op_start); end; $body$ ``` Name it "timing.sql" and run it with `psql -f timing.sql`. ]]> Wed, 18 Jun 2025 09:00:00 +0500 https://kdwarn.net/197/blog/programming https://kdwarn.net/197/blog/programming <![CDATA[Share colorized diffs]]> colorized_diff.html`. 4. Share the file. ]]> Fri, 6 Jun 2025 09:00:00 +0500 https://kdwarn.net/195/blog/programming https://kdwarn.net/195/blog/programming <![CDATA[Make curl use stderr properly]]> /dev/null`. But it did, so after the internet failed to deliver me a good answer, I took a peek into `man curl`. What was happening was that curl reports the progress of a request to stderr. Why? I don't know. Stderr is weird. I wanted only actual errors to be emailed to me, so I turned off progress reporting with `--no-progress-meter`. But if I tried a URL that returned a 404, that was also getting binned into /dev/null; e.g., `curl --no-progress-meter https://kdwarn.net/api/no-api-here > /dev/null` returned nothing, meaning nothing was getting reported to stderr. Then I found the `--fail` option, which makes curl report error 22 (and the HTTP status) to stderr, so now I get only the error reported to stderr with `curl --fail --no-progress-meter https://kdwarn.net/api/no-api-here > /dev/null`. I tested binning stderr to make sure I wasn't insane (because the curl documentation on this isn't very clear), and in fact `curl --fail --no-progress-meter https://kdwarn.net/api/no-api-here 2> /dev/null` prints no output. 
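What this buys you in a cron job can be sketched with a stand-in function. `fake_curl` here is made up, simulating what `curl --fail --no-progress-meter` does on a 404 (body to stdout, one error line to stderr, exit code 22):

```shell
# Hypothetical stand-in for the real curl call: body to stdout,
# error line to stderr, exit code 22 (what --fail produces on a 404).
fake_curl() {
    echo "response body"
    echo "curl: (22) The requested URL returned error: 404" >&2
    return 22
}

# The cron pattern: bin stdout, keep stderr. Cron would email only $errors.
# (2>&1 before > /dev/null: stderr goes to the capture, stdout is discarded.)
errors=$(fake_curl 2>&1 > /dev/null) || true
echo "captured stderr: $errors"
```

On a successful request, `$errors` is empty and cron sends nothing.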
]]> Fri, 30 May 2025 09:00:00 +0500 https://kdwarn.net/188/blog/programming https://kdwarn.net/188/blog/programming <![CDATA[Postgres Domain v. Constraint]]>
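A minimal sketch of what a domain with a constraint looks like (all names here are invented for illustration):

```sql
-- A domain is a named type with a constraint baked in.
create domain percentage as numeric
    constraint pct_range check (value >= 0 and value <= 100);

-- Reuse it on as many columns as you like.
create table survey_results (
    question text,
    pct_agree percentage,
    pct_disagree percentage
);

-- And redefine it later if the data turns out to be messier than expected.
alter domain percentage drop constraint pct_range;
alter domain percentage add constraint pct_range check (value >= -100 and value <= 100);
```

With that in place, here's why I reach for domains over plain column constraints: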
  • Since they are types, they can be defined once and used in multiple places. You cannot do this with constraints. If you want to use the same constraint on multiple columns, you have to write out the exact same text on each of them.
  • When a value violates the constraint on a domain - at least using Postgres's COPY command - it tells you the specific value (and only that value) that violated it. If a constraint on a regular type is violated, the error is less specific and prints out the entire row (or part of it) where the violation occurred. If there are many columns in the row, this makes it difficult to identify which value caused the issue.
  • Domains can be redefined. In my work, I first defined a domain with a constraint that was supposed to match the data. If there was a violation, I redefined the domain with a less restrictive constraint until there was no violation. My work in identifying issues is thus shown in the code. You cannot do this with constraints, at least not in the same way. Constraints don't replace each other; you'd have to erase or comment out a more restrictive one to get to a less restrictive one. ]]> Wed, 28 May 2025 09:00:00 +0500 https://kdwarn.net/189/blog/programming https://kdwarn.net/189/blog/programming <![CDATA[Installing a Postgres Version Not in Your (Debian/Debian Derivative) Distro's Packages]]> /etc/apt/sources.list.d/pgdg.list" # Update the package lists: sudo apt update # Install the latest version of PostgreSQL: # If you want a specific version, use 'postgresql-17' or similar instead of 'postgresql' sudo apt -y install postgresql # (I used postgresql-17 here) ``` I had not previously been aware of the file at /etc/os-release. Sourcing the variables there with `. /etc/os-release` is something I'll have to remember. What I did was use `$(lsb_release -cs)` instead, but I think only because I originally missed that line somehow. That's all well and good, but what if you found those instructions after following other, inferior instructions elsewhere and you have an aborted installation that still has lingering configuration files ... somewhere? Here's where the Debian tools come in handy. One of the main issues I had was that the port was set to the wrong number (totally my fault, but regardless) and I couldn't figure out where it was. A combination of `pg_ctlcluster` (and specifically `pg_ctlcluster 17 main status` in my case), `pg_dropcluster`, and `pg_createcluster` to the rescue. Check the `man` pages for more info, but it's pretty straightforward. 
If you're like me and have multiple versions of Postgres hanging around, don't forget to specify the port with the `-p` flag. NOTE: If you use Ansible, a lot of this work is available as a role in the repository for this website [here](https://codeberg.org/kdwarn/kdwarn.net/src/branch/main/ansible/roles/postgres).]]> Tue, 15 Apr 2025 09:00:00 +0500 https://kdwarn.net/179/blog/programming https://kdwarn.net/179/blog/programming <![CDATA[rsync with --update]]> Sat, 8 Feb 2025 09:00:00 +0500 https://kdwarn.net/192/blog/programming https://kdwarn.net/192/blog/programming <![CDATA[The Joy of Colocating Tasks and Notes in Plain Text]]> Sat, 21 Dec 2024 09:00:00 +0500 https://kdwarn.net/183/blog/programming https://kdwarn.net/183/blog/programming <![CDATA[at]]> ]]> Tue, 17 Dec 2024 09:00:00 +0500 https://kdwarn.net/152/blog/programming https://kdwarn.net/152/blog/programming <![CDATA[jujutsu]]> ` to write it on the command line. Push to remote, possibly. Then run `jj new` to start a new revision. I don't often use the `-m ` option with `jj new`, as I'm just trying to finish up the one I've been working on and leave things in a fresh state, but you can do that. (There's also one command that will replace `describe` and then `new`: `jj commit`, or `jj ci` for short.) * Aw fuck I forgot something: * if I've already started a new revision with `jj new`, just make whatever changes are necessary and then run `jj squash`. This will push the changes in the working copy into the previous revision, and the working copy will be empty. If you already added a description, an editor will pop up allowing you to edit the commit, very much like rebasing in `git`. * if not, run `jj new` and then `jj squash`. If you just run `jj squash` without starting a new revision, you'll be pushing all your changes, both current and previous, into the revision before the one you're attempting to add to. 
* if you already pushed to a remote, you can do it again, just specify the revision: `jj git push -r --remote `. There's no need (or option) for `--force`. Just push it. * it's also easy to do this with only some of the changes. I'll add that later. Pushing to remote: If finished with a revision and want to push it somewhere, don't start a new one (because you can just work on the working copy without having to specify a revision). Update the bookmark with `jj bookmark set main` to move the main bookmark/branch to the working copy. Then do `jj git push --remote ` to push it there. Then a new revision, to start further work, can be started with `jj new`. Various things: * `@` is the working copy and `@-` is used for the revision before the working copy. You can pass in the revision on most commands - `-r `. So `-r @-` is the one before the working copy. I'm not sure how far out it goes, but tacking on additional `-` will go one further. * `jj show` will show the commit description (the full one, not just the subject like `jj log` does) and revision changes. Handy as `jj show @-` to see the one before the working copy. * `jj undo` is pretty great. I fucked up some things and it made them go away. * `jj abandon` is both useful and good naming. Wrote some code that's actually not worth saving? `jj abandon`. * to use jujutsu out of the gate with a new Rust project (rather than "colocate" it with git), pass `--vcs=none` to `cargo new` and then run `jj git init` in the project's directory. * the "builtin_log_compact_full_description" template is the one that feels most like what I expect from `git log`. So I've added an alias for it, to "v", which means it can be called with `jj v`. The new part of my ~/.config/jj/config.toml file looks like this: ```toml [aliases] v = ["log", "-T", "builtin_log_compact_full_description"] # v for verbose ``` * Start a branch a while ago and then just kind of forget about it? 
And then you're like 30 commits from where you diverged but you want to pick up the old branch again? I'd have to probably read several blog posts and forum threads for git, but for jujutsu it took me just `jj help` and a couple minutes to figure out that the answer is just `jj rebase -r [revision] -d @` and everything seems ... like I wanted it to be? (`-b` or `-s` may be a better choice than `-r`. `jj help rebase` provides clear explanation and graphs to make the decision easy.) ]]> Sun, 10 Nov 2024 09:00:00 +0500 https://kdwarn.net/184/blog/programming https://kdwarn.net/184/blog/programming <![CDATA[It's Not REST]]> Wed, 28 Aug 2024 09:00:00 +0500 https://kdwarn.net/182/blog/programming https://kdwarn.net/182/blog/programming <![CDATA[Code for Yourself]]> Sun, 25 Aug 2024 09:00:00 +0500 https://kdwarn.net/159/blog/programming https://kdwarn.net/159/blog/programming <![CDATA[Don't Use Serial]]> Mon, 12 Aug 2024 09:00:00 +0500 https://kdwarn.net/163/blog/programming https://kdwarn.net/163/blog/programming <![CDATA[Helix * and gr]]> Fri, 14 Jun 2024 09:00:00 +0500 https://kdwarn.net/175/blog/programming https://kdwarn.net/175/blog/programming <![CDATA[Hosting Rust crate documentation with Nginx]]> ; it contains a library and binary program. Let's start locally. Clone that repo or use one of your own and then use `cargo`, that wonderful tool, to create the html documentation: `cargo doc --no-deps --open`. `--no-deps` excludes documentation for any dependencies; leave it off if you want them. `--open` opens the documentation in your browser. (For more information on creating Rust documentation, see the docs on [cargo doc](https://doc.rust-lang.org/cargo/commands/cargo-doc.html) and [rustdoc](https://doc.rust-lang.org/rustdoc/index.html).) `cargo` defaults to opening the library documentation when there is both a library and a binary, but you can easily get to the binary docs from the sidebar. Let's examine the URL we'll need to replicate. 
For this project, for me, the address is . The binary docs are at . What follows target/doc/ is the important part. (That's the default location, but [it's configurable](https://doc.rust-lang.org/cargo/commands/cargo-doc.html#output-options).) There is no page above those; going to will give you the directory index. However, as you can see by visiting that directory, there are all kinds of things there that need to be included. So, we'll make the contents of that entire directory accessible, and, though not necessary, redirect from the bare /doc path to the library's documentation. Now go to the server (Debian for me). I cloned the repo to /opt, `cd`'d into /opt/traffic-counts, and ran `cargo doc --no-deps`. The library's index.html is located at /opt/traffic-counts/target/doc/traffic_counts/index.html. For the binary documentation, it's ...doc/import/index.html. And finally, here are the nginx directives, in a jinja2 template, that allow you to then serve these static files. `{{ docs_url }}` is the root path you want them hosted at, e.g. use "/traffic-counts/docs" for . Don't forget to reload nginx after you add this to your configuration. ```jinja2 # Traffic Counts documentation # Make everything at target/doc available. location {{ docs_url }} { alias /opt/traffic-counts/target/doc; } # There is no top-level index, so redirect it to the library crate # with the trailing slash location = {{ docs_url }}/ { return 301 $scheme://$http_host{{ docs_url }}/traffic_counts/index.html; } # and without the trailing slash location = {{ docs_url }} { return 301 $scheme://$http_host{{ docs_url }}/traffic_counts/index.html; } ``` In the return (redirect) statements, I use `$scheme://$http_host` so that it'll work in both production and development environments. 
Particularly useful is `$http_host`, which will include the port with a localhost address.]]> Sun, 21 Apr 2024 09:00:00 +0500 https://kdwarn.net/177/blog/programming https://kdwarn.net/177/blog/programming <![CDATA[Diffing]]> `. 2. Use the `-N` (`--intent-to-add`) flag in `git add`, which "record[s] only the fact that the path will be added later" (from `git add --help`). As the help continues, "This is useful for, among other things, showing the unstaged content of such files with `git diff` and committing them with `git commit -a`." Not sure which way I'll settle on, but I'm glad I finally looked into it. I generally do the first version, but only because I didn't know any other way. And before I end up doing that, I usually think there is some simple flag I'm missing with `git diff` that'll do what I want, before realizing - once again - that there's not. ]]> Mon, 15 Apr 2024 09:00:00 +0500 https://kdwarn.net/161/blog/programming https://kdwarn.net/161/blog/programming <![CDATA[Associated Functions]]> Mon, 8 Apr 2024 09:00:00 +0500 https://kdwarn.net/151/blog/programming https://kdwarn.net/151/blog/programming <![CDATA[git show]]> Mon, 1 Apr 2024 09:00:00 +0500 https://kdwarn.net/174/blog/programming https://kdwarn.net/174/blog/programming <![CDATA[just]]> Wed, 14 Feb 2024 09:00:00 +0500 https://kdwarn.net/185/blog/programming https://kdwarn.net/185/blog/programming <![CDATA[--show-output with cargo test]]> Fri, 2 Feb 2024 09:00:00 +0500 https://kdwarn.net/196/blog/programming https://kdwarn.net/196/blog/programming <![CDATA[Capturing Catchall Test Value]]> Thu, 18 Jan 2024 09:00:00 +0500 https://kdwarn.net/157/blog/programming https://kdwarn.net/157/blog/programming <![CDATA[Science, Engineering, Construction, Gardening?]]> Unfortunately, the most common metaphor for software development is building construction. 
Bertrand Meyer’s classic work *Object-Oriented Software Construction* uses the term “Software Construction,” and even your humble authors edited the Software Construction column for IEEE Software in the early 2000s. > > But using construction as the guiding metaphor implies the following steps: > 1. An architect draws up blueprints. > 2. Contractors dig the foundation, build the superstructure, wire and plumb, and apply finishing touches. > 3. The tenants move in and live happily ever after, calling building maintenance to fix any problems. > > Well, software doesn’t quite work that way. Rather than construction, software is more like gardening — it is more organic than concrete. You plant many things in a garden according to an initial plan and conditions. Some thrive, others are destined to end up as compost. You may move plantings relative to each other to take advantage of the interplay of light and shadow, wind and rain. Overgrown plants get split or pruned, and colors that clash may get moved to more aesthetically pleasing locations. You pull weeds, and you fertilize plantings that are in need of some extra help. You constantly monitor the health of the garden, and make adjustments (to the soil, the plants, the layout) as needed. > > Business people are comfortable with the metaphor of building construction: it is more scientific than gardening, it’s repeatable, there’s a rigid reporting hierarchy for management, and so on. But we’re not building skyscrapers — we aren’t as constrained by the boundaries of physics and the real world. > > The gardening metaphor is much closer to the realities of software development. Perhaps a certain routine has grown too large, or is trying to accomplish too much — it needs to be split into two. Things that don’t work out as planned need to be weeded or pruned. I don't know that Harvey would disagree with this much; it's an extension of what he said, in a way. 
]]> Mon, 8 Jan 2024 09:00:00 +0500 https://kdwarn.net/193/blog/programming https://kdwarn.net/193/blog/programming <![CDATA['static for error types]]> Finally, where possible, your error type should be `'static`. The most immediate benefit of this is that it allows the caller to more easily propagate your error up the call stack without running into lifetime issues. It also enables your error type to be used more easily with type-erased error types, as we’ll see shortly. ]]> Sat, 6 Jan 2024 09:00:00 +0500 https://kdwarn.net/198/blog/programming https://kdwarn.net/198/blog/programming <![CDATA[Variable Number of SQL Clause Predicates]]> , query: Query, ) -> Result>, HttpError> { let context = rqctx.context(); let query_args = query.into_inner(); let mut query: QueryBuilder = QueryBuilder::new( "\ SELECT \ name, year, members, member_type, eligible_to_vote, source, source_url, notes \ FROM unions u \ JOIN union_members um ON u.id = um.union_id\ ", ); // Add WHERE clause if there are any query parameters. let mut predicates = vec![]; if let Some(v) = query_args.member_type { predicates.push((" member_type = ", DbValueTypes::String(v))); } if let Some(v) = query_args.eligible_to_vote { predicates.push((" eligible_to_vote = ", DbValueTypes::Bool(v))); } if !predicates.is_empty() { let mut predicates = predicates.into_iter().peekable(); query.push(" WHERE "); while let Some((text, var)) = predicates.next() { query.push(text); match var { DbValueTypes::Bool(x) => query.push_bind(x), DbValueTypes::I32(x) => query.push_bind(x), DbValueTypes::String(x) => query.push_bind(x), }; if predicates.peek().is_some() { query.push(" AND "); } } } // Order results (by number of union members by default). 
query.push(" ORDER BY "); let order_clause = match query_args.order_by { Some(UnionMemberOrder::Members) | None => "members DESC", Some(UnionMemberOrder::Union) => "name ASC", Some(UnionMemberOrder::Year) => "year DESC", }; query.push(order_clause); let query = query.build_query_as(); let unions_members = query.fetch_all(&context.pool).await.unwrap(); Ok(HttpResponseOk(unions_members)) } ``` ]]> Thu, 4 Jan 2024 09:00:00 +0500 https://kdwarn.net/208/blog/programming https://kdwarn.net/208/blog/programming <![CDATA[cal]]> Tue, 5 Dec 2023 09:00:00 +0500 https://kdwarn.net/156/blog/programming https://kdwarn.net/156/blog/programming <![CDATA[Instrumentation]]> Wed, 8 Nov 2023 09:00:00 +0500 https://kdwarn.net/180/blog/programming https://kdwarn.net/180/blog/programming <![CDATA[systemctl cat]]> ` will print the path and contents of the unit file for a service. I sometimes know the filename and so it's easy enough to just `cat` the path, but sometimes I don't and I guess several times until I give up and look it up. Just using this by default would probably end up saving some time/make things smoother.]]> Fri, 18 Aug 2023 09:00:00 +0500 https://kdwarn.net/200/blog/programming https://kdwarn.net/200/blog/programming <![CDATA[Broken Pipe]]> Thu, 11 May 2023 09:00:00 +0500 https://kdwarn.net/155/blog/programming https://kdwarn.net/155/blog/programming <![CDATA[How to Learn Rust]]> Sat, 21 Jan 2023 09:00:00 +0500 https://kdwarn.net/178/blog/programming https://kdwarn.net/178/blog/programming <![CDATA[let if]]> { "There are no recently updated indicators."}

    } } else { html! {
        { format!("{:#?}", self.updated_indicators) }
    } }; ``` ]]>
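Stripped of the yew `html!` macro, the same let-if pattern can be sketched in plain Rust (the function and names here are invented for illustration):

```rust
// Assigning the result of an if/else expression directly to a variable,
// the same shape as the yew snippet above, minus the html! macro.
fn status_message(updated: &[&str]) -> String {
    let msg = if updated.is_empty() {
        "There are no recently updated indicators.".to_string()
    } else {
        format!("{:#?}", updated)
    };
    msg
}

fn main() {
    println!("{}", status_message(&[]));
    println!("{}", status_message(&["GDP", "CPI"]));
}
```

Because `if`/`else` is an expression in Rust, both branches feed one binding; no mutable placeholder needed.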
    Thu, 13 Oct 2022 09:00:00 +0500 https://kdwarn.net/186/blog/programming https://kdwarn.net/186/blog/programming
<![CDATA[Looping with modulus]]> Wed, 27 Jul 2022 09:00:00 +0500 https://kdwarn.net/187/blog/programming https://kdwarn.net/187/blog/programming <![CDATA[Use format! to turn chars to string]]> Thu, 30 Jun 2022 09:00:00 +0500 https://kdwarn.net/204/blog/programming https://kdwarn.net/204/blog/programming <![CDATA[Syncing Dotfiles]]> Codeberg, which presented a small challenge — how to do the commit and push once I collected all the files into a git repository? Here's the simplified bit of bash for that: ```sh git_status=$(git status) if [[ $git_status != *"nothing to commit"* ]]; then git add "*" && git commit -am "Update" && git push fi ``` If the stdout of running `git status` doesn't contain "nothing to commit", then it adds all files in the repo, commits with the message "Update", and pushes it. That's not a very meaningful commit message — especially not as the *only* message in the history after the initial setup — but I'm not particularly concerned with that and more with having the files always up-to-date and accessible. Another small challenge was with cron. I didn't want to run the script repeatedly all day, but if I just ran it once a day there was a chance my computer wouldn't be on at the time and so the cronjob wouldn't run. [Anacron](https://sourceforge.net/projects/anacron/) to the rescue! Anacron will run jobs on a regular basis like cron, except that it is aware of the last time jobs ran and will run them again if they haven't run within the specified interval. Anacron isn't installed on Linux distros by default (or at least not Debian and its derivatives), but it's a simple `sudo apt install anacron` to install it. By default, anacron's configuration file is located at /etc/anacrontab and it tracks job runs in /var/spool/anacron. I wanted these to be in my user space, so I created those directories/files under ~/.anacron. 
Here is the part of the config file (~/.anacron/etc/anacrontab) related to this project: ```sh 1 3 manage_dotfiles ~/coding/dotfiles/manage_dotfiles > /dev/null ``` There are two other pieces to this. The first is including this in my ~/.profile file, so that anacron runs on startup: ```sh anacron -t "$HOME/.anacron/etc/anacrontab" -S "$HOME/.anacron/var/spool/anacron" ``` And the second is a regular cronjob that will run anacron every hour (which causes anacron to check if any jobs need to be run, and run them if so): ``` 0 * * * * anacron -t "$HOME/.anacron/etc/anacrontab" -S "$HOME/.anacron/var/spool/anacron" ``` That's pretty much it. [Here](https://codeberg.org/kdwarn/dotfiles/)'s the link to the repo, which includes the full `manage_dotfiles` bash script. ]]> Mon, 9 May 2022 09:00:00 +0500 https://kdwarn.net/199/blog/programming https://kdwarn.net/199/blog/programming <![CDATA[Generic and traits]]> (t: T) {}`. And you can specify not just any type, but a type that implements a specific trait: `fn<T: FromStr>(t: T) {}`, which can be read as "For any type that implements the FromStr trait." When writing this up in my notes, I also came across returning a type that implements a trait, which makes more sense now, although it raised a few questions about the whole subject that I need to dig into sometime. ]]> Mon, 18 Apr 2022 09:00:00 +0500 https://kdwarn.net/173/blog/programming https://kdwarn.net/173/blog/programming <![CDATA[fold()]]> Sun, 20 Feb 2022 09:00:00 +0500 https://kdwarn.net/172/blog/programming https://kdwarn.net/172/blog/programming <![CDATA[Client Interface First]]> When you’re trying to design code, writing the client interface first can help guide your design. Write the API of the code so it’s structured in the way you want to call it; then implement the functionality within that structure rather than implementing the functionality and then designing the public API. 
> > Similar to how we used test-driven development in the project in Chapter 12, we’ll use compiler-driven development here. We’ll write the code that calls the functions we want, and then we’ll look at errors from the compiler to determine what we should change next to get the code to work. ]]> Wed, 19 Jan 2022 09:00:00 +0500 https://kdwarn.net/158/blog/programming https://kdwarn.net/158/blog/programming <![CDATA[Transforming programming]]> Mon, 17 Jan 2022 09:00:00 +0500 https://kdwarn.net/203/blog/programming https://kdwarn.net/203/blog/programming <![CDATA[Refactoring in Rust]]> Sun, 9 Jan 2022 09:00:00 +0500 https://kdwarn.net/191/blog/programming https://kdwarn.net/191/blog/programming <![CDATA[Iterators and collect()]]> )" by Ana Hobden. Both were very helpful. I discovered the `.inspect()` method in the Hobden piece, and it seems like it will really be useful in debugging/figuring out iterator chains. ]]> Sat, 8 Jan 2022 09:00:00 +0500 https://kdwarn.net/181/blog/programming https://kdwarn.net/181/blog/programming <![CDATA[Trait function signatures]]> Self; } impl AppendBar for Vec<String> { fn append_bar(mut self) -> Self { self.push("Bar".to_string()); self } } ``` ]]> Fri, 7 Jan 2022 09:00:00 +0500 https://kdwarn.net/202/blog/programming https://kdwarn.net/202/blog/programming <![CDATA[This Is What I Know about match]]> ` symbol, potentially in a curly brace block, if longer than one line) - Rust will not examine any subsequent arms. Additionally, matches are exhaustive: every possible option must be handled, otherwise the code will not compile. Use "match guards" to further refine what you are matching. This is done by following the pattern with a bool-type expression. See 2nd arm of the longer example below. 
Here are some syntax options for the tests (the left side): * just provide the value * `x ..= y` - inclusive range from x to y * `x | y` - x or y * `_` - any (this will often be done as the last arm to catch all other possibilities) Here is an example from the Rust book, matching on enum variants: ```rust enum Coin {     Penny,     Nickel,     Dime,     Quarter, } fn value_in_cents(coin: Coin) -> u8 { match coin {     Coin::Penny => 1,     Coin::Nickel => 5,     Coin::Dime => 10,     Coin::Quarter => 25, // if this one, e.g, was not included, the code wouldn't compile     } } ``` This example (from my solution on [Exercism](https://exercism.org/tracks/rust/exercises/rpn-calculator/)) shows a number of these concepts as well as the `matches!` macro: ``` rust pub fn evaluate(inputs: &[CalculatorInput]) -> Option { if inputs.is_empty() { return None; } let mut rpn: Vec = vec![]; for each in inputs { match each { CalculatorInput::Value(x) => { // {} not necessary, but this shows the longer form rpn.push(*x); } // note the lack of comma compared to the shorthand form _ if rpn.len() < 2 => return None, // match guard // the reason for this is because the four other possibilities all require these // temp1 and temp2 vars to be created, otherwise would have just done normal match _ => { let temp2 = rpn.pop().unwrap(); let temp1 = rpn.pop().unwrap(); if matches!(each, CalculatorInput::Add) { // matches! macro rpn.push(temp1 + temp2); } if matches!(each, CalculatorInput::Subtract) { rpn.push(temp1 - temp2); } if matches!(each, CalculatorInput::Multiply) { rpn.push(temp1 * temp2); } if matches!(each, CalculatorInput::Divide) { rpn.push(temp1 / temp2); } } } } if rpn.len() > 1 { return None; } Some(rpn[0]) } ``` You can also assign the result from a match expression to a variable (example from Tim McNamara's *Rust in Action*, Ch. 
2): ```rust let needle = 42; let haystack = [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]; for item in &haystack { let result = match item { 42 | 132 => "hit!", // 42 or 132 _ => "miss", // anything else }; } ``` There is a shorthand expression when you care about only one of the cases and don't need to do anything for all others: `if let`: ```rust if let Some(3) = some_u8_value {     println!("three"); } ``` (An update from the future: At first, I found the `let` here to be confusing, because nothing was being assigned in the block. What is the `let` doing!? It seems clearer syntax would be `if some_u8_value == Some(3)`. I'm sure there are good reasons this isn't possible. But after a while, it became second nature so I stopped thinking about it.) You can also use `else` with this, when you want to define the behavior to be done instead of just no behavior. Sources: * [https://doc.rust-lang.org/std/macro.matches.html](https://doc.rust-lang.org/std/macro.matches.html) * [https://doc.rust-lang.org/std/keyword.match.html](https://doc.rust-lang.org/std/keyword.match.html) * [https://doc.rust-lang.org/reference/expressions/match-expr.html](https://doc.rust-lang.org/reference/expressions/match-expr.html) * [https://doc.rust-lang.org/reference/expressions/match-expr.html#match-guards](https://doc.rust-lang.org/reference/expressions/match-expr.html#match-guards) * [https://doc.rust-lang.org/stable/book/ch06-00-enums.html](https://doc.rust-lang.org/stable/book/ch06-00-enums.html) * [https://www.rustinaction.com/](https://www.rustinaction.com/) ]]> Thu, 16 Dec 2021 09:00:00 +0500 https://kdwarn.net/201/blog/programming https://kdwarn.net/201/blog/programming <![CDATA[Braces]]> Sat, 2 Oct 2021 09:00:00 +0500 https://kdwarn.net/154/blog/programming https://kdwarn.net/154/blog/programming <![CDATA[The Secret Life of Programs]]> Tue, 21 Sep 2021 09:00:00 +0500 https://kdwarn.net/194/blog/programming https://kdwarn.net/194/blog/programming <![CDATA[Various items]]> Tue, 14 Sep 2021 
09:00:00 +0500 https://kdwarn.net/207/blog/programming https://kdwarn.net/207/blog/programming <![CDATA[Background and Foreground]]> Sun, 29 Aug 2021 09:00:00 +0500 https://kdwarn.net/153/blog/programming https://kdwarn.net/153/blog/programming <![CDATA[Creating a Python Virtual Environment in Debian via Ansible]]> Sat, 24 Jul 2021 09:00:00 +0500 https://kdwarn.net/209/blog/programming https://kdwarn.net/209/blog/programming <![CDATA[Host-dependent variables in Ansible]]> Sat, 24 Jul 2021 09:00:00 +0500 https://kdwarn.net/176/blog/programming https://kdwarn.net/176/blog/programming <![CDATA[Various items]]> Mon, 12 Jul 2021 09:00:00 +0500 https://kdwarn.net/206/blog/programming https://kdwarn.net/206/blog/programming <![CDATA[Rebase and Merge]]> Mon, 5 Jul 2021 09:00:00 +0500 https://kdwarn.net/190/blog/programming https://kdwarn.net/190/blog/programming <![CDATA[A Couple Tips on GitHub Actions]]> Mon, 5 Jul 2021 09:00:00 +0500 https://kdwarn.net/150/blog/programming https://kdwarn.net/150/blog/programming <![CDATA[Vagrant, Libvirt, and Ansible]]> Sat, 3 Apr 2021 09:00:00 +0500 https://kdwarn.net/205/blog/programming https://kdwarn.net/205/blog/programming <![CDATA[Flashcards CLI 2.0 Released]]> = v2.1) includes a command to convert to the new format. I still have some plans for improvements, but I somewhat doubt that there will ever be a version 3.0. Try it out if you're looking to memorize some things and like the command line!]]> Wed, 10 Mar 2021 09:00:00 +0500 https://kdwarn.net/170/blog/programming https://kdwarn.net/170/blog/programming <![CDATA[A Zettelkasten with Vim and Bash]]> l :let @y = "[[" . expand("%") . "]] " :1,1y z let @" = @y . @z ``` So, for me, typing `\l` in normal mode will do this, as I'm using the default leader. 
Then it's just a matter of using `p` to put it where I want, and it will look like this, for instance: ``` [[notes/2021-01-20-0351.md]] Wagtail ``` To make these links further stand out, I created the file ~/.vim/after/syntax/markdown.vim and added the following: ```vim syntax match markdownRef "\v\[\[[-a-z0-9\.\/\:]+\]\]" highlight link markdownRef Type ``` In the colorscheme I use (solarized8), the brackets and the path between them appear in a mustard yellow color, setting it off nicely from the rest of the text. To follow this link in Vim, just use the standard `gf` shortcut when the cursor is on it. The relative path works, because I always start my zk from the zk/ directory. Someday I may want to move the zk out of my Dropbox folder, and if I were to use full paths, all the links would then be broken. ### Exploring the Zk In addition to the index, where high-level notes can be easily followed through to all of the connected notes, I also have a couple of other ways to access notes. These are aided by the use of tags, though tags aren't strictly necessary. I make sure that every note (except reference notes) has at least one tag. I put tags on the second line of each file, just below the title, and preface them with an "@" symbol. So, for instance, I have a @django tag and a @bash tag and about 20 others so far (and even an @orphan tag for notes that haven't been connected to anything else yet). I'm trying to limit the number of tags and keep them relatively broad, so it doesn't become too much of a mess and so I don't have to spend too much time thinking about how to tag something. The tags are highlighted, in a dark lime green color, via the following in the markdown.vim file mentioned above: ```vim syntax match markdownTag "\v\@[a-z0-9]+" highlight link markdownTag Statement ``` I've also created two bash commands that will allow easy searching of the zk from a terminal, whether that be by a tag name or any other text. 
They are both in my ~/bin folder, and rely on the `$zk` variable in my .bashrc. Here is the first, named `zkgrep`: ```sh #!/bin/bash cd $zk || exit grep -B 1 -A 2 -in --color=always "$1" notes/* doc/* | less -RM +Gg ``` Calling this command followed by a pattern I'm looking for (e.g. `zkgrep @bash`) will use grep to search through all files in the notes/ and doc/ folders for that (case-insensitive) pattern and pipe it to less to display them. It will colorize the searched-for pattern (`--color=always` on the grep side and `-R` on the less side) in the output, include one line above the line where the pattern was found (`-B 1`) and two lines below it (`-A 2`), and precede each line returned with the filename and line number (`-n`). The `-M +Gg` options provide a more verbose prompt in less: current lines displayed (`-M`), plus total lines and % of total (`+Gg`), in order to provide an idea of how long the results are. The reason for getting the prior line and the two lines after the line that the search pattern appears on is for context. This is particularly true when I search for tags: because tags are on the second line of the file and the first line of the file is the title of the note, this returns the title, tags, and the next two lines. I made a shortcut (`gz` for "go zkgrep") in Vim to this command, though it's slightly more limited as it can only search for the one word under the cursor. It works well for tags: ```vim noremap gz :!zkgrep ``` The second is very similar to the first, except rather than include four lines from each file, it outputs full files. Its name is `zkgrepfull`: ```sh #!/bin/bash cd $zk || exit grep -iz --color=always "$1" notes/* doc/* | less -RM +Gg ``` Finally, the following command - `zkrand` - will open a random note from my zk as well as the index file. I use it every other day or so, just to take a peek at some note that I may not have otherwise seen recently. 
The idea is that doing so can help refresh my memory of ideas I've previously had or solutions or libraries I've used in coding, because maybe I've forgotten about them. Or, perhaps there has been a more recent note I wrote that is related to this random one and I didn't realize that at first, and I can make links between the two. ```sh #!/bin/bash cd $zk || exit vim "$(ls notes/* | shuf -n 1)" index.md ``` That's everything I have, at least so far. I don't expect that I'll make any major changes to this setup, though maybe there will be some refinement. I hope this helps someone with their own zk. If you have any questions or comments, hit me up on Mastodon.]]> Sun, 7 Feb 2021 09:00:00 +0500 https://kdwarn.net/171/blog/programming https://kdwarn.net/171/blog/programming