Installing a Postgres Version Not in Your (Debian/Debian Derivative) Distro's Packages
April 15, 2025
coding, database, sysadmin | permalink
Hello future me, you're welcome.
I found myself wanting to use the latest version of Postgres - 17 - but it was not available in the system packages on either my desktop (running Pop!_OS 22.04) or the server it would also be used on (running Debian 11). Not surprising, especially on the Debian side. However, what was surprising was how uncomplicated it was to install and configure. (I hesitate to say "simple".)
Much of this comes from Postgres itself, along with a couple Debian tools. On the Postgres side, they have excellent instructions at Linux Downloads (Debian). I am copying the manual instructions here for posterity (for whatever reason, the script they provide didn't work for me - I assume it was an issue with permissions/ssh/interactive-ness):
# Import the repository signing key:
# Create the repository configuration file:
# Update the package lists:
# Install the latest version of PostgreSQL:
# If you want a specific version, use 'postgresql-17' or similar instead of 'postgresql'
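These are roughly the commands those comments describe (this is my recollection of the current apt.postgresql.org instructions, so check the official page before copying):

sudo apt install curl ca-certificates
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
. /etc/os-release
sudo sh -c "echo 'deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt $VERSION_CODENAME-pgdg main' > /etc/apt/sources.list.d/pgdg.list"
sudo apt update
sudo apt -y install postgresql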
I had not previously been aware of the file at /etc/os-release. Sourcing the variables there with . /etc/os-release is something I'll have to remember. What I did was use $(lsb_release -cs) instead, but I think only because I originally missed that line somehow.
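For example:

. /etc/os-release
echo "$VERSION_CODENAME"   # "bullseye" on Debian 11, the same value that $(lsb_release -cs) prints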
That's all well and good, but what if you found those instructions after following other, inferior instructions elsewhere and you have an aborted installation that still has lingering configuration files ... somewhere?
Here's where the Debian tools come in handy. One of the main issues I had was that the port was set to the wrong number (totally my fault, but regardless) and I couldn't figure out where it was. A combination of pg_ctlcluster (and specifically pg_ctlcluster 17 main status in my case), pg_dropcluster, and pg_createcluster to the rescue. Check the man pages for more info, but it's pretty straightforward. If you're like me and have multiple versions of Postgres hanging around, don't forget to specify the port with the -p flag.
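Something like the following (the cluster name and port here are just examples):

pg_ctlcluster 17 main status                    # is the cluster running, and which config is it using?
sudo pg_dropcluster --stop 17 main              # drop the misconfigured cluster
sudo pg_createcluster -p 5433 --start 17 main   # recreate it, explicitly on port 5433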
NOTE: If you use Ansible, a lot of this work is available as a role in the repository for this website here.
Hosting Rust crate documentation with Nginx
April 21, 2024
coding, rust, docs, nginx | permalink
I didn't come across a guide on how to do this with an online search (but hey, online search is a dumpster fire like most things on the internet nowadays), so I thought I'd write up something quick to help others looking to host their own crate documentation. It's quite easy if you're already familiar with nginx. Here I'll be using the work project I did this for as an example. The source is available at https://github.com/dvrpc/traffic-counts; it contains a library and binary program.
Let's start locally. Clone that repo or use one of your own and then use cargo, that wonderful tool, to create the html documentation: cargo doc --no-deps --open. --no-deps excludes documentation for any dependencies; leave it off if you want them. --open opens the documentation in your browser. (For more information on creating Rust documentation, see the docs on cargo doc and rustdoc.)
cargo defaults to opening the library documentation when there is both a library and a binary, but you can easily get to the binary docs from the sidebar. Let's examine the URL we'll need to replicate. For this project, for me, the address is file:///home/kris/dvrpc/traffic-counts/traffic-counts/target/doc/traffic_counts/index.html. The binary docs are at file:///home/kris/dvrpc/traffic-counts/traffic-counts/target/doc/import/index.html. What follows target/doc/ is the important part. (That's the default location, but it's configurable.)
There is no page above those; going to file:///home/kris/dvrpc/traffic-counts/traffic-counts/target/doc/ will give you the directory index. However, as you can see by visiting that directory, there are all kinds of things there that need to be included. So, we'll make the contents of that entire directory accessible, and, though not necessary, redirect from the bare /doc path to the library's documentation.
Now go to the server (Debian for me). I cloned the repo to /opt, cd'd into /opt/traffic-counts, and ran cargo doc --no-deps. The library's index.html is located at /opt/traffic-counts/target/doc/traffic_counts/index.html. For the binary documentation, it's ...doc/import/index.html.
And finally, here are the nginx directives, in a jinja2 template, that allow you to then serve these static files. {{ docs_url }} is the root path you want them hosted at, e.g. use "/traffic-counts/docs" for http://example.com/traffic-counts/docs. Don't forget to reload nginx after you add this to your configuration.
# Traffic Counts documentation
# Make everything at target/doc available.
location {{ docs_url }} {
alias /opt/traffic-counts/target/doc;
}
# There is no top-level index, so redirect it to the library crate
# with the trailing slash
location = {{ docs_url }}/ {
return 301 $scheme://$http_host{{ docs_url }}/traffic_counts/index.html;
}
# and without the trailing slash
location = {{ docs_url }} {
return 301 $scheme://$http_host{{ docs_url }}/traffic_counts/index.html;
}
In the return (redirect) statements, I use $scheme://$http_host so that it'll work in both production and development environments. Particularly useful is $http_host, which will include the port with a localhost address.
How to Learn Rust
January 21, 2023
It's been somewhere around two years since I first began to learn and love Rust. In that time, I've read a lot of things about the programming language. Here are a few resources I recommend to someone who wants to learn it as well.
Read this first: How Not to Learn Rust
Rust has its own list of learning resources, and I think those should be the starting point for anyone:
- The Rust Programming Language (aka "the book")
- Rustlings
- Rust by Example
However, I'd say that Rust by Example was the least useful to me, perhaps because I did the book and Rustlings first. In any case, I recommend doing something like this: read the book and do the Rustlings course at the same time, then work on some small project, check out Rust by Example, then read the book and do Rustlings again. I think I'm on my fourth run of Rustlings, and each time I notice that my skills have improved, but there are also still some things that I have to seek help for.
Also, note that there's now an interactive version of the book, from two researchers at Brown University. Maybe you want to try that out instead of the standard version?
Next I'd recommend the 2nd edition of the book Programming Rust. It's absolutely great and I've found that it is also a good reference to turn to first, rather than trying to search online.
Speaking of online help, you cannot go wrong with the Rust users forum. Search it for issues you have. Regularly browse it to see other questions people have and the solutions that are proposed.
There's also a plethora of Discord servers out there - two general Rust ones and it seems like one for every published crate. I personally dislike Discord because it walls off knowledge and is an absolutely horrible platform for trying to discover information (it was, after all, meant for gaming chat), but that's often where you have to go to get help for very specific things. If you can't find help elsewhere, see if the tool/framework/whatever has a Discord server and then start punching words into the search box and try to maintain patience as you wade through 50 different conversations to find what you're looking for.
Finally, something adjacent to this brief discussion is Rust Edu, which "is an organization dedicated to supporting Rust education in the academic environment."
That should get you started!
Addendum: Other Resources
- The Mediocre Programmer's Guide to Rust
- Compiler-driven Development: Making Rust Work for You - talk given by Jacob Pratt at RustConf 2024 that is beginner-friendly
- Flattening Rust's Learning Curve
Addendum: Why learn Rust?
If you are here, you are probably already convinced that you want to learn Rust. But if not, I'm starting to collect some articles that praise Rust and explain why. I'm not sure if I'll keep this section or not - it may go away.
Syncing Dotfiles
May 9, 2022
For a while I kept some configuration files in Dropbox so I could get to them outside my home computer. I wrote a simple bash script that would move them from their normal locations to a directory in my local Dropbox, and then set up a cronjob to run that script every day. That was ok, but they weren't easily accessible publicly. Or at least not in the way many people share their dotfiles, which is to just have a repo for them.
So I decided to move them from Dropbox to GitHub (now Codeberg), which presented a small challenge — how to do the commit and push once I collected all the files into a git repository? Here's the simplified bit of bash for that:
git_status=$(git status)
if [[ $git_status != *"nothing to commit"* ]]; then
    git add -A && git commit -m "Update" && git push
fi
If the stdout of running git status doesn't contain "nothing to commit", then it adds all files in the repo, commits with the message "Update", and pushes it. That's not a very meaningful commit message — especially not as the only message in the history after the initial set up — but I'm not particularly concerned with that and more with having the files always up-to-date and accessible.
Another small challenge was with cron. I didn't want to run the script repeatedly all day, but if I just ran it once a day there was a chance my computer wouldn't be on at the time and so the cronjob wouldn't run. Anacron to the rescue! Anacron will run jobs on a regular basis like cron, except that it is aware of the last time jobs ran and will run them again if they haven't run within the specified interval. Anacron isn't installed on Linux distros by default (or at least not Debian and its derivatives), but it's a simple sudo apt install anacron to install it. By default, anacron's configuration file is located at /etc/anacrontab and it tracks when jobs were last run in /var/spool/anacron. I wanted these to be in my user space, so I created those directories/files under ~/.anacron. Here is the part of the config file (~/.anacron/etc/anacrontab) related to this project:
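# period (days)   delay (minutes)   job id     command
# (the job id and script location below are just placeholders)
1                 10                dotfiles   $HOME/bin/manage_dotfiles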
There are two other pieces to this. The first is including this in my ~/.profile file, so that anacron runs on startup:
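# in ~/.profile: run anacron as my user, pointing it at my own config and spool directories
anacron -t "$HOME/.anacron/etc/anacrontab" -S "$HOME/.anacron/var/spool/anacron"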
And the second is a regular cronjob that will run anacron every hour (which causes anacron to check if any jobs need to be run, and run them if so):
0 * * * * anacron -t "$HOME/.anacron/etc/anacrontab" -S "$HOME/.anacron/var/spool/anacron"
That's pretty much it. Here's the link to the repo, which includes the full manage_dotfiles bash script.
This Is What I Know about match
December 16, 2021
I've been learning Rust for some time now, and really enjoying it - the static typing, the helpful compiler, the tooling, the documentation, just about everything. I think I'm in the neophyte stage. Anyway, this is my attempt to describe the match keyword, mostly (but certainly not entirely) in my own words.
match is the keyword for pattern matching - like case in Bash and other languages, though a little more powerful. Like case, it is similar to a series of if/else expressions, where you go down through them to control the flow of code. However, the conditions of if and else if expressions must evaluate to a boolean value, while the value a match expression matches against can be of any type. Each possibility is an "arm" that has a pattern to be evaluated, and all arms need to return something of the same type. The first pattern that matches has its associated code run (the code follows a => symbol, in a curly-brace block if longer than one line) - Rust will not examine any subsequent arms. Additionally, matches are exhaustive: every possible option must be handled, otherwise the code will not compile.
Use "match guards" to further refine what you are matching. This is done by following the pattern with a bool-type expression. See 2nd arm of the longer example below.
Here are some syntax options for the tests (the left side):
- just provide the value
- x..=y - inclusive range from x to y
- x | y - x or y
- _ - any (this will often be done as the last arm to catch all other possibilities)
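A contrived example pulling a few of those together:

fn main() {
    let n = 7u8;
    let description = match n {
        1..=5 => "one through five",
        6 | 7 => "six or seven",
        _ => "something else",
    };
    println!("{}", description); // prints "six or seven"
}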
Here is an example from the Rust book, matching on enum variants:
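// The Coin example from the book's chapter on enums, from memory, so it may
// not match the current edition exactly.
enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter,
}

fn value_in_cents(coin: Coin) -> u8 {
    match coin {
        Coin::Penny => 1,
        Coin::Nickel => 5,
        Coin::Dime => 10,
        Coin::Quarter => 25,
    }
}

fn main() {
    // not in the book, just here so the snippet runs on its own
    println!("{}", value_in_cents(Coin::Dime)); // 10
}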
This example (from my solution on Exercism) shows a number of these concepts as well as the matches! macro:
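I won't reproduce the whole solution here, but a generic, made-up sketch of a match guard (like the 2nd arm mentioned above) and the matches! macro looks like this:

fn describe(n: Option<i32>) -> &'static str {
    match n {
        None => "nothing",
        // a match guard: a pattern followed by `if` and a boolean expression
        Some(x) if x < 0 => "negative",
        Some(_) => "zero or more",
    }
}

fn main() {
    println!("{}", describe(Some(-4)));                  // "negative"
    println!("{}", matches!(Some(5), Some(x) if x > 0)); // "true"
}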
You can also assign the result from a match expression to a variable (example from Tim McNamara's Rust in Action, Ch. 2):
let needle = 42;
let haystack = [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862];

for item in &haystack {
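    // The loop body assigns the result of the match to a variable. (The haystack
    // values above and this body are reconstructed from memory of the book's
    // example, so the details may differ slightly.)
    let result = match item {
        42 | 132 => "hit!",
        _ => "miss",
    };

    if result == "hit!" {
        println!("{}: {}", item, result);
    }
}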
There is a shorthand expression when you care about only one of the cases and don't need to do anything for all others: if let:
if let Some(3) = some_u8_value {
    println!("three");
}
(An update from the future: At first, I found the let here to be confusing, because nothing was being assigned in the block. What is the let doing!? It seems clearer syntax would be if some_u8_value == Some(3). I'm sure there are good reasons this isn't possible. But after a while, it became second nature so I stopped thinking about it.)
You can also use else with this, for when you want to define behavior for the non-matching case rather than doing nothing.
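For example:

let some_u8_value = Some(0u8);
if let Some(3) = some_u8_value {
    println!("three");
} else {
    println!("not three");
}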
Sources:
- https://doc.rust-lang.org/std/macro.matches.html
- https://doc.rust-lang.org/std/keyword.match.html
- https://doc.rust-lang.org/reference/expressions/match-expr.html
- https://doc.rust-lang.org/reference/expressions/match-expr.html#match-guards
- https://doc.rust-lang.org/stable/book/ch06-00-enums.html
- https://www.rustinaction.com/
Creating a Python Virtual Environment in Debian via Ansible
July 24, 2021
coding, python, ansible, linux | permalink
Debian is my preferred Linux distribution, particularly for servers. Although it is the base from which so many other distros are derived, it seems that it gets short shrift by a lot of packages. It took me a little while to figure out how to create a Python virtual environment on it with Ansible, so here is a quick example in the hopes that it may help others who hit this particular hurdle.
Two system installations are necessary:
- name: Install system packages
  apt:
    name:
      - python3-pip
      - python3-venv
      # others you need
    state: present
I prefer to use built-ins wherever possible, so I use venv rather than packages like virtualenv or pyenv for virtual environments. Here is the task that will create one, using the pip module. It will also install any dependencies listed in a requirements.txt file:
- name: Set up virtual environment and install requirements.
  pip:
    virtualenv_command: /usr/bin/python3 -m venv
    virtualenv: "/project/path/ve" # the directory to install the ve
    requirements: "/project/path/requirements.txt"
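That task does roughly the equivalent of running the following by hand (a sketch, using the same placeholder paths as above):

/usr/bin/python3 -m venv /project/path/ve
/project/path/ve/bin/pip install -r /project/path/requirements.txt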
You would think that would be all it takes, but for some reason, Ansible still appears to want to use Python 2 with Debian, so you need to tell it again that you want to use Python 3, this time at the inventory level. Declare the executable explicitly in the inventory file with the ansible_python_interpreter variable, e.g.:
all:
  hosts:
    my_host:
      ansible_host: # domain or ip
      ansible_private_key_file: # /path/to/key
      ansible_python_interpreter: /usr/bin/python3
Voilà, you should have a Python 3 virtual environment using the built-in venv module.
A Couple Tips on GitHub Actions
July 5, 2021
I work primarily on a Linux machine, and for a while now I've wanted to set up the GitHub actions for my flashcards CLI (now at https://codeberg.org/kdwarn/flashcards) so that the tests can be run on Mac and Windows, because I honestly have no idea if the program will run correctly on those operating systems. The CLI is written in Python and so the main workings of it shouldn't be an issue; I was more concerned with storing and accessing the sqlite database that underpins it. (And in fact was 95% sure that how I entered its path would at least cause a failure on Windows.) Today I finally got around to doing that, although I didn't find the process exactly straightforward. Perhaps that's because I was skimming through the documentation too quickly and trying to find one particular thing, but in any case, here is how I was able to set it up.
I already had a "workflow" set up that ran tests on the Linux virtual environment (Ubuntu) that GH offers, and so I needed to update that. But before I changed that, I wanted to see if I could manually run the workflow, so I didn't have to push a new commit just to have it run. So I changed the code from:
# this is just a snippet of the larger file
on:
  push:
    branches:
  pull_request:
    branches:
to:
# this is just a snippet of the larger file
on:
  push:
    branches:
  pull_request:
    branches:
  workflow_dispatch:
It turned out that was correct, although it wasn't immediately obvious that it was, because when I returned to the main Actions page, I didn't see the "This workflow has a workflow_dispatch event trigger" text with the "Run workflow" button next to it, like I should have. I think this is because I did it on a non-main branch and had not yet submitted a pull request, though I'm actually not that sure. In any case: keep an eye out, and possibly submit a pull request so that it shows up.
The other thing to point out is that just submitting a pull request triggers the workflow, because of that pull_request: section above being in it. I wasn't patient enough and left the page before GH could initiate the workflow and show me that it was running. Had I waited just a couple seconds after submitting the pull request, I would have seen this (as I learned subsequently on another one). So I didn't even really need to set up the manual running of the workflow - just making a commit and pull request would have triggered it, without needing to commit on main or make a more substantial commit to the actual code of the project. I definitely should have realized this, as I've seen it when I've submitted pull requests to other projects, not to mention that it's specified right there in the workflow configuration. But hey, sometimes you have to give yourself extra unnecessary busywork in order to realize something in a new context. Plus, now I at least know how to set up a workflow so it can be run manually.
Finally, to the whole point of what I was doing. Setting up the additional OS environments was pretty simple. From:
# again, just a portion of the file
runs-on: ubuntu-latest
strategy:
  matrix:
    python-version:
to:
runs-on: ${{ matrix.os }}
strategy:
  matrix:
    os:
    python-version: # also added 3.9 here
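Filled in, the matrix ends up looking something like this (the os values are the point here; the Python versions are just examples, with 3.9 being the one I added):

runs-on: ${{ matrix.os }}
strategy:
  matrix:
    os: [ubuntu-latest, macos-latest, windows-latest]
    python-version: ["3.8", "3.9"]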
Vagrant, Libvirt, and Ansible
April 3, 2021
coding, ansible, virtualization, linux | permalink
I recently started to learn how to use Ansible, from Jeff Geerling's book Ansible for DevOps, which so far I highly recommend. A bit of a pain point has been using virtual machines created by vagrant as if they were remote servers — without using Ansible as a provisioner and just running Ansible commands like normal. I think I now have a good workflow that I thought might help others in this situation.
First, a couple notes on my setup: my OS is Debian Testing (which I just upgraded to last week from Buster, which was surprisingly simple and painless to do) and I'm using libvirt as the provider, but this should probably be applicable to other configurations. I'm not going to cover installation of these or vagrant or Ansible, though perhaps someday I'll edit that in.
- Create the virtual machine with vagrant. If you haven't yet added the box you want to use, add it with vagrant box add [user]/[box]. Let's say it's CentOS 7, since that's what Geerling tends to use: vagrant box add generic/centos7. (I haven't been using the boxes he made since they use VirtualBox rather than libvirt.) Then initialize it: vagrant init generic/centos7. This creates the configuration file, Vagrantfile, in the current directory. If you'll be using some kind of web server on the virtual machine, open that file up and uncomment the line config.vm.network "forwarded_port", guest: 80, host: 8080 (putting in whatever host port you'll use in your browser to connect to it).
- Start up the virtual machine: vagrant up.
- Get the ip address of the vm: vagrant ssh-config. Either use it directly or put it in an Ansible "inventory" file.
- Add the ssh key of the vm to your SSH authentication agent. For libvirt vms, this is located at .vagrant/machines/default/libvirt/private_key from the vm directory (where the Vagrantfile is located), so run ssh-add .vagrant/machines/default/libvirt/private_key. I imagine that if you use a different provider, it would just be a matter of substituting its name for the "libvirt" directory.
That should do it — you should now be able to use ssh or ansible or ansible-playbook to connect to the virtual machine.
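If you go the inventory route, it can be as small as this (hypothetical values, taken from the output of vagrant ssh-config):

all:
  hosts:
    vagrant_vm:
      ansible_host: 192.168.121.45 # the HostName line from `vagrant ssh-config`
      ansible_user: vagrant
      ansible_private_key_file: .vagrant/machines/default/libvirt/private_key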
Note that if you set up multiple servers with vagrant and then run an Ansible command on them, you'll need to confirm the ssh fingerprint of each the first time. Since Ansible runs in parallel mode by default, you'll repeatedly get asked to confirm it, and it doesn't always seem to work. So in this case I'll limit Ansible to one fork the first time by passing -f 1 to the command.
Flashcards CLI 2.0 Released
March 10, 2021
coding, python, cli | permalink
A little less than a year ago I was looking for a flashcards command-line interface, mainly so I could help myself memorize various Vim commands. Though I'd previously used AnkiWeb for studying, I wanted a CLI because I spend much of my time in the terminal and I wanted to be able to quickly add a new "card" and then jump back to whatever I was doing.
I primarily program in Python, so I tried out two or three different CLIs that I found on PyPI. One was broken on installation (at least with Python 3), and someone else had already raised an issue on GitHub a few months before with the same problem I had. It looked like a very easy fix so I forked the project and submitted a pull request. I didn't get a response, but I liked the interface so I started to use my version of it, and meanwhile poked around the code more. I found some areas that could be simplified, and then more and more kept appearing. Soon I was gutting most of the object-oriented code in favor of a more structural approach. Between that, switching from unittest to pytest, adding more features, and moving from file-based storage to a database, there's not a whole lot of the code I started with left.
Going from file-based storage to an sqlite database was a breaking change (and what necessitated version 2.0), but I think it was a good decision. It was prompted by sqlite repeatedly popping up lately, but also by my desire to make "reversible" cards, so that either the "question" or "answer" could appear before the other during a study session, rather than always question-then-answer. That latter part would have required some structural change within the files, and I felt like they were already cumbersome to work with. Switching to a database would not only allow me to further simplify the codebase, but also make it easier to continue to evolve features.
I doubt that anyone aside from myself had been using my version of flashcards 1.x, but just in case — and because I needed to do it for myself anyway — v2.0 (but not >= v2.1) includes a command to convert to the new format. I still have some plans for improvements, but I somewhat doubt that there will ever be a version 3.0.
Try it out if you're looking to memorize some things and like the command line!
A Zettelkasten with Vim and Bash
February 7, 2021
coding, docs, vim, bash | permalink
A few years ago I decided to take coding from an on-and-off hobby to a professional endeavor and I've been keeping notes on the various things I learn since that time. This started off with notes on Python, I think in a libreoffice doc, in the form of dated entries. That soon became difficult to find information in, and so I moved to a markdown file, structured by topics (and subtopics, and then subtopics of subtopics, etc.). Meanwhile, I also started similar files on other things - git, contributing to open source, flask, javascript, css, etc. I was also bookmarking blog posts, talks, and other useful items, along with sometimes extensive notes on them. If I took an online course, I would also create a file for taking notes on the content. Sometimes I would piece it back into the topic files, but generally it was kind of a pain and so I didn't. Between the different files and the bookmarks, it was sometimes difficult to find something I swore I had taken notes on. And if I couldn't find it, back to Google or Stack Overflow.
More recently, I switched from using VSCode to Vim as my editor of choice, in part because of frequently needing to ssh into remote systems and write or modify some code on them. I quickly fell in love with its editing style and haven't looked back. I generally prefer to use as much vanilla Vim as possible and have about ten plugins installed, but I still tweak things now and again and I'm interested in how others have their Vim set up and how they use it. It was while following that interest that I stumbled across the concept of the "Zettelkasten" (German for "slip-box") in a blog post about note-taking in Vim.
I've now been using the zettelkasten (zk from here on out) system for a few months and can say that it's made what was previously a mess of notes much more useful, discoverable, and enjoyable to maintain. I started by taking better, more "atomic" notes, and then as I touch upon different subjects, I pull in other parts of my previous spaghetti-code-like notes and link to them or from them as necessary. Overall, I feel like the zk has made it much easier to both explore subjects and remind myself how I did something. I also use it for some non-coding things, and having it all in one place makes my professional and intellectual life more structured and organized. But I'm not here to write another blog post evangelizing the zk system or what to name different types of notes or how to connect them. I think there's probably been enough of that done, and if there were one resource I'd recommend, it would be Sönke Ahrens's How to Take Smart Notes.
What I am here to write about is my particular zk structure and the minimal tooling for adding to, managing, and exploring my zk. As the title of this post suggests, it basically comes down to Vim and Bash, through a handful of functions, commands, and shortcuts.
Structure
Everything is contained within plaintext files, in the following layout:
zk/
    index.md
    doc/
    notes/
    refs/
The doc/ folder is a holdover from my old system of long, topic documents. There are still about ten files in it, but I'm slowly working my way through them and breaking them up into smaller pieces. The refs/ folder contains references. These just have the author, title, date, and, if the reference is available online, a link to it. The filename is in the format author-yyyy.md. The actual content of the reference is stored in Zotero (in case the page ever goes down) or Calibre (if an ebook). The notes/ folder contains all the actual notes.
The index.md file is the main entry point of the zk. In addition to containing links to other notes, the index also includes a list of tags I've used, some notes on how to navigate about, and brief descriptions of the shortcuts and functions I've created.
I have a shortcut in my .bashrc file to the zk directory so that I can cd into it quickly from anywhere, with just cd $zk:
zk="/Dropbox/zk/"
As you can see, I have the zk in my Dropbox folder, so it's always synced and backed up. I also back it up to an external hard drive, under a timestamped directory, every three months.
Saving and Linking Notes
I'm often already in Vim when I want to create a new note, so I use :enew to start a new, unnamed buffer. If I'm not already in Vim, it's a quick Ctrl+Alt+t to open a terminal and then type vim into the prompt. After typing up the note, I use the :Zk command I created to save the file in the notes/ folder, with a timestamp for the filename. (And no title because I may actually tweak the title - the first line of the file - over time.) Here's the function and command for that, which I have in my .vimrc:
function! SaveWithTS()
if expand('%:t') == ""
let l:filename = escape( strftime("%Y-%m-%d-%H%M"), ' ' )
execute "write " . "~/Dropbox/zk/notes/" . l:filename . ".md"
else
echo "File already saved and named."
endif
endfunction
command! Zk call SaveWithTS()
Note that if the file has already been saved, running the command will just echo a message stating that.
I'm new to Vimscript so if that's not very good code, please let me know.
After the new note is saved, it needs to be linked to some other note. The following Vim shortcut helps with that. It will get the file's relative path, add double brackets around it, yank the first line of the file (its title), and then put that altogether into the unnamed register.
noremap <Leader>l :let @y = "[[" . expand("%") . "]] " <bar> :1,1y z <bar> let @" = @y . @z<CR>
So, for me, typing \l in normal mode will do this, as I'm using the default leader. Then it's just a matter of using p to put it where I want, and it will look like this, for instance:
[[notes/2021-01-20-0351.md]] Wagtail
To make these links further stand out, I created the file ~/.vim/after/syntax/markdown.vim and added the following:
syntax match markdownRef "\v\[\[[-a-z0-9\.\/\:]+\]\]"
highlight link markdownRef Type
In the colorscheme I use (solarized8), the brackets and the path between them appear in a mustard yellow color, setting it off nicely from the rest of the text.
To follow this link in Vim, just use the standard gf shortcut when the cursor is on it. The relative path works, because I always start my zk from the zk/ directory. Someday I may want to move the zk out of my Dropbox folder, and if I were to use full paths, all the links would then be broken.
Exploring the Zk
In addition to the index, where high-level notes can be easily followed through to all of the connected notes, I also have a couple of other ways to access notes. These are aided by the use of tags, though tags aren't strictly necessary. I make sure that every note (except reference notes) has at least one tag. I put tags on the second line of each file, just below the title, and preface them with an "@" symbol. So, for instance, I have a @django tag and a @bash tag and about 20 others so far (and even an @orphan tag for notes that haven't been connected to anything else yet). I'm trying to limit the number of tags and keep them relatively broad, so it doesn't become too much of a mess and so I don't have to spend too much time thinking about how to tag something.
The tags are highlighted, in a dark lime green color, via the following in the markdown.vim file mentioned above:
syntax match markdownTag "\v\@[a-z0-9]+"
highlight link markdownTag Statement
I've also created two bash commands that will allow easy searching of the zk from a terminal, whether that be by a tag name or any other text. They are both in my ~/bin folder, and rely on the $zk variable in my .bashrc. Here is the first, named zkgrep:
#!/bin/bash
cd "$zk" || exit
grep -rin --color=always -B 1 -A 2 "$1" notes/ doc/ | less -R -M +Gg
Calling this command followed by a pattern I'm looking for (e.g. zkgrep @bash) will use grep to search through all files in the notes/ and doc/ folders for that (case-insensitive) pattern and pipe it to less to display them. It will colorize the searched-for pattern (--color=always on the grep side and -R on the less side) in the output, include one line above the line where the pattern was found (-B 1) and two lines below it (-A 2), and precede each line returned with the filename and line number (-n). The -M +Gg options provide a more verbose prompt in less: current lines displayed (-M), plus total lines and % of total (+Gg), in order to provide an idea of how long the results are. The reason for getting the prior line and the two lines after the line that the search pattern appears on is for context. This is particularly true when I search for tags: because tags are on the second line of the file and the first line of the file is the title of the note, this returns the title, tags, and the next two lines.
I made a shortcut (gz for "go zkgrep") in Vim to this command, though it's slightly more limited as it can only search for the one word under the cursor. It works well for tags:
noremap gz :!zkgrep <cWORD><CR>
The second is very similar to the first, except rather than include four lines from each file, it outputs full files. Its name is zkgrepfull:
#!/bin/bash
cd "$zk" || exit
# list the files containing the pattern, then page through them in full
grep -ril "$1" notes/ doc/ | xargs less -M +Gg
Finally, the following command - zkrand - will open a random note from my zk as well as the index file. I use it every other day or so, just to take a peek at some note that I may not have otherwise seen recently. The idea is that doing so can help refresh my memory of ideas I've previously had or solutions or libraries I've used in coding, because maybe I've forgotten about them. Or, perhaps there has been a more recent note I wrote that is related to this random one and I didn't realize that at first, and I can make links between the two.
#!/bin/bash
cd "$zk" || exit
# open one randomly chosen note along with the index
vim "$(find notes -type f | shuf -n 1)" index.md
That's everything I have, at least so far. I don't expect that I'll make any major changes to this setup, though maybe there will be some refinement. I hope this helps someone with their own zk. If you have any questions or comments, hit me up on Mastodon.