Yes, one can safely pet her even when she looks like she's gonna kill you.
Here's another example of the kind of task that I no longer do myself but rather hand off to the AI, because it does it so well. My prompt:
Let's implement a new option --fib_eff that gets a single value or a range, like 0.9 or 0.7-0.9 , in order to set the fiber efficiencies in the simulation. see fiber_efficiency.md how to do that. set the eff after loading the HDF file, dont modify it. Ranges like 0.7-0.9 are to be interpreted as a random value within that range, uniformly distributed. ask if this was unclear, othweise get crackin and iterate. example command: uv run andes-sim flat-field --band H --subslit slitA --wl-min 1600 --wl-max 1602 --fib_eff 0.5-0.95
As expected, Claude had no problem figuring this out. This is the resulting commit. The image above is the output of that example command.
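The parsing logic the prompt asks for is small; here is a sketch of how it might look (the function name and structure are my illustration, not necessarily what Claude actually wrote):

```python
import random

def parse_fib_eff(spec: str) -> float:
    """Parse a --fib_eff value: either a single number like '0.9',
    or a range like '0.7-0.9' meaning a uniform random draw
    within that range."""
    if "-" in spec:
        lo, hi = (float(part) for part in spec.split("-", 1))
        return random.uniform(lo, hi)
    return float(spec)
```

Per the prompt, the resulting value is then applied to the fiber efficiencies after the HDF file is loaded, rather than baked into the file.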
I did not have to type anything other than the prompt above. And in doing so, I needed to get clear for myself what I actually wanted, which is always a good thing to do first. The rest is just typing.
The whole project, a CLI to make simulated spectra, was "vibe-coded" this way, me caring little about the actual code that Claude produced, only taking the occasional glance at it. It is therefore probably not great for a long-term maintained code-base, but it already does what I need it to and I have no intention to take this much further.
She has not gotten a Nobel prize herself, local celebrity nanotech professor Maria Strømme. But she has been part of the Nobel prize presentations, so let's apply the moniker Nobel disease loosely. It means
the embrace of strange or scientifically unsound ideas by some Nobel Prize winners, usually later in life.
What qualifies Strømme is that she recently did the classic crank thing and wrote about the mixture of consciousness and quantum woo. I'll just leave these links here, in case you are interested.
Running a small farm implies a constant stream of things to fix. The image shows an example: the fruit grinder (to the right) in our juicery broke down. The motor fried itself towards the end of the season, and I was hesitant to invest in a new one for just two remaining days.
Thus, one of the old motors that one naturally accumulates had to drive it externally with a belt. At first I did not gut the old motor, which made it melt a second time thanks to induction -- lesson learned. The receiving axle is not perpendicular to the floor, so I made the angle of the other one adjustable, together with the belt tension, by using one of the most important tools of all, a ratchet strap.
I notice it in myself, and in the vibes on the interwebs: this dissonance between loving how useful AI is for oneself, on the one hand, and being annoyed when others use it "on you", or force you to use a bad one, on the other.
Examples abound: slop bug reports, all the companies that try to use chatbots for support questions, and the general feeling that companies push AI a bit too hard down our throats.
But when used right and with taste, it can be fantastic. This is not a contradiction; both the good and the bad exist, and the gap between them will probably only get larger.
A good rule-of-thumb for oneself may be to not make anyone read or interact with AI slop against their will. That means having read and understood AI-generated things to a degree that one feels comfortable standing behind them before sharing. Alternatively, being very clear about when this is not the case, so that everyone can choose their level of attention accordingly.
ChatGPT is a mere three years old by now -- it sure feels longer. By now, many seem to either have dismissed LLMs as useless (or evil) or have quickly taken them for granted. But I continue to be amazed and find I get more and more value from Claude et al.
What I probably like the most is that LLMs remove friction. What often slows us down is not the lack of ideas or things to do and try, but the sum of the small hurdles and hoops one has to jump through before getting anywhere.
Well, no more. Config problems with your web-app? Data needs to be converted to or from an obscure custom format? Need to understand why your code does not do what you think it should? Or understand someone else's code quickly? LLMs got you covered. It's like rubberducking but the duck talks back and can solve the entire problem for you!
Yes, they make mistakes and one may need to verify the results. But in many cases this is trivial, or not really necessary, for example when discussing different tools or solutions to a problem. Skimming through the alternatives quickly lets one choose a path forward that is very likely better than it would have been without asking the model.
This is an important aspect: having some taste and quick judgement helps a lot; in other words, knowing what you want. If you really do, you probably are able to explain it well enough to an LLM and get it to execute for you. If not, then LLMs can help you too, but one needs to have enough clarity of mind to recognize this as a different mode of operation, where non-leading questions and iterations on the problem can improve one's thinking very much like a discussion with a colleague or expert would have done.
To quote TheZvi:
AI is the best tool ever invented for learning. AI is the best tool ever invented for not learning. You can choose which way you use AI. #1 is available but requires intention.
Last week, on a whim, I gave Claude Code (CC) a task that turned out to be both possible and highly useful, because it removed a common annoyance for some colleagues and myself.
See, there is this C library called CPL that ESO uses for processing astronomical data. There is also a Python wrapper for it, pycpl, which allows one to use the library from Python. That is great, because Python is the language that astronomers and data scientists mostly use nowadays.
However, pycpl does not come with CPL, but instead requires it to be installed beforehand, which often implies compiling it oneself because not all operating systems package the right version, if at all. This is not a big hurdle for developers, but if one wants to share a half-finished pipeline with some prospective users, it easily becomes one. Containers and such can help with this but are always a crutch that I would rather avoid.
Claude to the rescue. The off-hand remark in a telecon that we should just make a pycpl package that comes with CPL (and its dependency libs) included took hold with me long enough to ask CC to just do it!
I started by downloading the latest CPL source code, and the three libs it needs. Then I asked CC to initialize the git repository and sort out which files to add and which to ignore.
After pushing to GitHub, I could switch to the online version of CC, where I had free credits to burn, so no harm done if this endeavour turned into a failure. Then I just quickly told it what to do, typos and all:
pycpl is a python wrapper for C-lib CPL. But it is packaged without the C-lib, so the overall goal is to upgrade the pycpl package to include the build of CPL and its depencencies (which are also present here). start by looking at the build system of pycpl and how to include the other lobraries to it. then move the libs to appropriate places inside pycpl and try the build.
This session log and this follow-up basically show how CC figured it all out. I only skimmed through it at the time and could not tell you what exactly it did. At some point I realized it needed some of the files that were omitted earlier, so I added those back. In the end, I had a package that installed locally -- a success already.
But what would make this really useful is a Python "wheel", i.e. a package bundle that is pre-compiled for different platforms and Python versions. This way, users would be able to install it instantly, without any compiling happening at all. So I naturally asked CC about it, and how to set things up so that GitHub Actions do the compiling. This was the most tedious bit: CC needed many iterations to get it right, and compiling on GitHub is not fast. So I let it work in the background over an evening, only checking in occasionally. Claude figured it out in the end! The package installs and works nicely for myself and several colleagues.
Initially, the plan was to also upload to PyPI, because ESO does not actually do that. But I was not able to lay claim to the name "pycpl" there, and without that it would not work. Plus, I did not want to step on people's toes too much by publishing work that is not my own, even though, in principle, that should be fine with GPL-licensed code.
Thus, for now, one has to provide an "extra index URL" to install this pycpl package. Use uv, for example like this:
(uv) pip install pycpl --extra-index-url https://ivh.github.io/pycpl/simple/
Or try whether it works without having anything prepared:
uv run --with pycpl --extra-index-url https://ivh.github.io/pycpl/simple/ python -c "import cpl;"
Or add it to the header of a script file like this:
uv add --script main.py --index https://ivh.github.io/pycpl/simple/ pycpl
One can also add the index URL to one's pyproject.toml, together with the ESO index URL that provides tools like pyesorex and edps, which seem to play nicely with my pycpl instead of ESO's own. For how to do that, see the README in the GitHub repo.
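For uv specifically, the extra index can live in pyproject.toml; something along these lines should work (this is my sketch assuming uv's `[[tool.uv.index]]` table, so check the repo's README for the authoritative setup):

```toml
# Sketch of a pyproject.toml fragment for uv; see the README
# in the GitHub repo for the exact, up-to-date configuration.
[[tool.uv.index]]
name = "pycpl"
url = "https://ivh.github.io/pycpl/simple/"
# ...plus a second [[tool.uv.index]] entry for the ESO index URL.
```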
I've been using Claude Code (CC) quite extensively in recent weeks, and apart from a few fails it's been a blast. I want to write about a few of the things I got it to do successfully, but in order to do that I need to be able to share the session logs.
Unfortunately, there is no straightforward way to do this. The logs
are saved as .jsonl files in
$HOME/.claude/projects/-path-to-your-work-directory/, and when using the
online version of CC, all one has to do is use the Open
in CLI button and continue locally; the .jsonl with the whole session
will then show up in the project folder.
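For scripting around this, the folder name appears to be the absolute path of the work directory with slashes turned into dashes, as the path above suggests. A small helper under that assumption:

```python
from pathlib import Path

def session_log_dir(workdir: str) -> Path:
    """Guess the Claude Code project folder for a given work directory.
    Assumes the observed convention of encoding the path by replacing
    '/' with '-'; verify against your own ~/.claude/projects/."""
    encoded = str(Path(workdir)).replace("/", "-")
    return Path.home() / ".claude" / "projects" / encoded
```

The session logs are then the `*.jsonl` files inside that folder.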
These files are a bit unwieldy. For example, they contain the whole content of files that CC reads and a bunch of distracting metadata. I tried Simon Willison's claude_to_markdown.py, but that was not quite what I wanted, which is a static HTML file with embedded JavaScript (JS) to hide the long reads and outputs by default but make them expandable if needed.
What better way to achieve this than to just let CC do it? Very meta, I know. So
I downloaded the JS from https://claude.ai/code/ (after all, they have solved
the same task there already) and put it into a fresh repo with an example
session log. This is the outline I wrote and added to the repo:
# main goal
a script, python or other, that takes session logs from Claude Code (CC)
and converts them into HTML.
## example data
- a06171f9-5f33-4258-84e1-4dc70e84c6dd.jsonl an example session log. all
input files will have this format.
- Screenshot, two example screenshots of how it looks on CC web.
- CCweb_example.html and CCweb_example_files/ , the saved web page of CC
that should contain useful routines to render the session. Ignore the
left half of the page and session management, only the session part
itself is needed.
## requirements
- the output should be a single html-file, named like the input but ending
.jsonl exchanged to .html
- all javascript should be inlined.
- the script does not need to be self-contained, can e.g. read js files or
templates to make the output.
- the html should look similar to the screenshots, i.e. compact with
  unnecessary information skipped, file reads hidden, and long diffs
shortened but expandable.
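The "hidden but expandable" requirement maps naturally onto HTML's built-in `<details>`/`<summary>` elements. A minimal sketch of that idea (a hypothetical helper of mine, not the script CC actually produced):

```python
import html

def render_block(text: str, label: str, limit: int = 400) -> str:
    """Emit a plain <pre> block if the content is short; otherwise
    wrap it in a collapsible <details> element so the page stays
    compact but the full content remains one click away."""
    escaped = html.escape(text)
    if len(text) <= limit:
        return f"<pre>{escaped}</pre>"
    return (f"<details><summary>{html.escape(label)} "
            f"({len(text)} chars)</summary><pre>{escaped}</pre></details>")
```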
Then all that was left to do was to point CC to the repository and tell it to get crackin'.
It wasn't a perfect one-shot success, as you can see. But with just a little prodding, CC figured it out. I then continued in a new short session to have it sum up the elapsed working time and put that at the top of the HTML. Not bad at all, I would say. Feel free to check it out on GitHub.