A lot of the technology is explained in the manual (look for "Appendix 3", written by yours truly), so here I give some supplemental details.
The first thing we did was create a Yacc parser for SQL92 (the lexer was hand-written). At the time, not entirely by accident, I was teaching a semester of Parsing at a private school representing French universities. This proved handy, as delving into Yacc debugging to resolve shift-reduce conflicts was an everyday task.
Another essential task was creating an optimal execution plan (called an access path), a task that is sufficiently explained in the documentation. I developed the algorithm in Prolog (a fact that made my boss anxious; he felt compelled to pass by my desk every day to verify that I was not goofing off), tested it, then ported it to C. In the process it was also transformed from a deterministic algorithm into an approximate one: it timed out after a predefined interval and returned the best solution found so far. It was a natural successor of two earlier things of mine:
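The deadline trick can be sketched as follows. Everything here is illustrative: the cost model, the table count, and the local-search move are made up for the example, and the real optimizer's structure is not preserved; only the "anytime" shape is, where a deterministic search is cut short by a clock and the best plan seen so far is returned.

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

#define NTABLES 6

/* Toy cost model for a join order: the running row estimate is summed
 * at each step. Purely illustrative, not Ovrimos's cost function. */
static double plan_cost(const int *order, const double *sizes, int n) {
    double cost = 0.0, rows = 1.0;
    for (int i = 0; i < n; i++) {
        rows *= sizes[order[i]] * 0.1;  /* pretend 10% join selectivity */
        cost += rows;
    }
    return cost;
}

/* Anytime search: try random transpositions of the join order until
 * the deadline expires, always keeping the best plan seen so far. */
static double optimize(const double *sizes, int n, double seconds,
                       int *best_order) {
    clock_t deadline = clock() + (clock_t)(seconds * CLOCKS_PER_SEC);
    int order[NTABLES];
    for (int i = 0; i < n; i++) best_order[i] = order[i] = i;
    double best = plan_cost(order, sizes, n);
    while (clock() < deadline) {
        int a = rand() % n, b = rand() % n;
        int t = order[a]; order[a] = order[b]; order[b] = t;
        double c = plan_cost(order, sizes, n);
        if (c < best) {
            best = c;
            for (int i = 0; i < n; i++) best_order[i] = order[i];
        } else {
            t = order[a]; order[a] = order[b]; order[b] = t;  /* undo */
        }
    }
    return best;
}
```

The caller gets an answer after a bounded wait, which for an optimizer is usually the right trade: a slightly worse plan now beats the perfect plan later.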
- One was a code generator for discrete numeric problems I had written in Paris, during an internship with Jean-Louis Laurière, that created nested loops for all the variables involved and pushed constraints as deep into the nest as possible
- The other was an iterator that sat on top of the BTrees that were Altera's legacy. It used what Martin Fowler calls a fluent interface (along the lines of new dbiter().on("FOO").from(3).to(1000)).
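Since the server was in C, such a fluent interface would have looked different from the Java-ish pseudocode above; the following is a hypothetical C rendition (none of these names are the real ones) where each setter returns the iterator so the calls chain, reading inside-out instead of left-to-right.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fluent range iterator. A real version would walk a
 * B-tree; here the "index" is just a counter over the key range. */
typedef struct dbiter {
    char index[32];
    long lo, hi, cur;
} dbiter;

static dbiter *dbiter_new(void) {
    dbiter *it = calloc(1, sizeof *it);
    it->lo = 0;
    it->hi = -1;  /* empty until .to() is called */
    return it;
}
static dbiter *dbiter_on(dbiter *it, const char *index) {
    strncpy(it->index, index, sizeof it->index - 1);
    return it;
}
static dbiter *dbiter_from(dbiter *it, long lo) {
    it->lo = it->cur = lo;
    return it;
}
static dbiter *dbiter_to(dbiter *it, long hi) {
    it->hi = hi;
    return it;
}

/* Yield the next key in [lo, hi], or report exhaustion. */
static int dbiter_next(dbiter *it, long *key) {
    if (it->cur > it->hi) return 0;
    *key = it->cur++;
    return 1;
}
```

The C equivalent of `new dbiter().on("FOO").from(3).to(1000)` is then `dbiter_to(dbiter_from(dbiter_on(dbiter_new(), "FOO"), 3), 1000)`.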
The only database statistic we used was table size. There were no index statistics, and the selectivity of a constraint on an indexed column was determined syntactically, as explained in the manual.
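Syntactic selectivity boils down to a fixed guess per predicate shape. The numbers and predicate classes below are invented for illustration (the actual figures are in the manual, not here); the point is only that the estimate depends on the form of the constraint, never on the data.

```c
#include <assert.h>

/* Hypothetical predicate shapes and their fixed selectivity guesses. */
enum pred { PRED_EQ, PRED_RANGE, PRED_LIKE, PRED_NEQ };

static double selectivity(enum pred p) {
    switch (p) {
    case PRED_EQ:    return 0.05;  /* col = const: very selective */
    case PRED_RANGE: return 0.30;  /* col < const, BETWEEN */
    case PRED_LIKE:  return 0.25;  /* LIKE 'abc%' with a fixed prefix */
    case PRED_NEQ:   return 0.90;  /* col <> const: barely filters */
    }
    return 1.0;
}

/* Estimated output = table size (the one statistic we had) times the
 * product of the per-predicate guesses. */
static double est_rows(double table_rows, const enum pred *preds, int n) {
    double s = 1.0;
    for (int i = 0; i < n; i++) s *= selectivity(preds[i]);
    return table_rows * s;
}
```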
We developed all new functionality on Windows, then ported to a crazy number of operating systems. That was usually my field. We used custom Make scripts for distribution builds, but Nikos Mavroyannopoulos introduced Autoconf into the unixODBC project. Because we could not depend on having a (free) C++ compiler on all these platforms, we had to fall back on C: ANSI C, fortunately. However, one part of the architecture was object-oriented - the hard way: the iterator hierarchy used vtables represented as arrays of function pointers.
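Object orientation "the hard way" looks roughly like this. The names are made up, but the mechanism is the classic one: a vtable is a static table of function pointers shared by all instances of a class, each object's first field points to its class's vtable, and a concrete type embeds the base struct first so pointers can be cast both ways.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct iter iter;

/* The "class": a table of function pointers, one per virtual method. */
typedef struct iter_vtbl {
    int  (*next)(iter *self, long *out);
    void (*close)(iter *self);
} iter_vtbl;

/* The "base class": every instance starts with a vtable pointer. */
struct iter {
    const iter_vtbl *vtbl;
};

/* One concrete subclass: yields the integers lo..hi. */
typedef struct {
    iter base;   /* must be first, so (iter *) and (range_iter *) alias */
    long cur, hi;
} range_iter;

static int range_next(iter *self, long *out) {
    range_iter *r = (range_iter *)self;
    if (r->cur > r->hi) return 0;
    *out = r->cur++;
    return 1;
}
static void range_close(iter *self) { free(self); }

static const iter_vtbl range_vtbl = { range_next, range_close };

static iter *range_iter_new(long lo, long hi) {
    range_iter *r = malloc(sizeof *r);
    r->base.vtbl = &range_vtbl;
    r->cur = lo;
    r->hi = hi;
    return &r->base;
}
```

Callers dispatch through the table, `it->vtbl->next(it, &k)`, never caring which concrete iterator they hold; that is exactly what a C++ compiler would have generated for us, done by hand.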
The code was written in a portable manner, and was adapted to 32- and 64-bit architectures. Only the startup code, and the code for memory-mapped files and threading, were completely forked between Windows and POSIX systems.
To achieve the effect of threads on systems that lacked the facility, we had some really intricate co-routining code.
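The original co-routining code is lost, so the following is only a sketch of one portable-C trick of that kind: stackless co-routines in the spirit of Duff's device. Each coroutine records the line it suspended at; a `switch` on that number jumps back into the middle of the function on the next call. It is legal C, and exactly the sort of thing that earns the word "intricate".

```c
#include <assert.h>

typedef struct { int line; } coro;

/* Resume at the recorded line (0 = fresh start). */
#define CORO_BEGIN(c) switch ((c)->line) { case 0:
/* Suspend: remember this line, return a value, and place a case
 * label so the next call's switch lands right here. */
#define CORO_YIELD(c, v) do { (c)->line = __LINE__; return (v); \
                              case __LINE__:; } while (0)
/* Close the switch; after the last yield, reset and report -1. */
#define CORO_END(c) } (c)->line = 0; return -1

/* A generator yielding 10, 20, 30, then -1 (and starting over). */
static int gen(coro *c) {
    CORO_BEGIN(c);
    CORO_YIELD(c, 10);
    CORO_YIELD(c, 20);
    CORO_YIELD(c, 30);
    CORO_END(c);
}
```

The price of the trick is that no local variables survive across a yield (they must live in the coroutine struct), which is one reason such code gets hard to maintain.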
Stress-testing the system proved what is now obvious, but was not then: that you can only do more by doing less at the same time. We incorporated funneling, which artificially limited the number of concurrent connections. We used TPC-C queries to test the server functionally, but also to stress-test it with a rig that used all available workstations and a custom RSH to start them all at the same time. To be fair, we implemented (to the best of our ability) the same tests on Oracle and SQL Server (J. Tzikas worked on the porting). Ovrimos performed better on the stock hardware we were using (I don't remember if we ever stress-tested on the DEC Alpha, which must have been the most powerful machine ever to enter the office).
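Funneling in this sense is just admission control, and a counting semaphore is the simplest way to sketch it. The width of 4 and all names here are illustrative, not Ovrimos's actual values; the `peak` counter only exists to demonstrate that the cap holds.

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#define FUNNEL_WIDTH 4   /* illustrative cap on concurrent work */

static sem_t funnel;
static int active = 0, peak = 0;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;

static void handle_request(void) { /* pretend to execute a query */ }

/* Each connection's worker: excess requests block at the funnel
 * instead of all thrashing the server at once. */
static void *worker(void *arg) {
    (void)arg;
    sem_wait(&funnel);            /* at most FUNNEL_WIDTH get through */
    pthread_mutex_lock(&mu);
    if (++active > peak) peak = active;
    pthread_mutex_unlock(&mu);

    handle_request();

    pthread_mutex_lock(&mu);
    active--;
    pthread_mutex_unlock(&mu);
    sem_post(&funnel);            /* let the next one in */
    return NULL;
}
```

The counter-intuitive part is that throughput goes up when the cap goes down past a point: fewer concurrent requests means less lock contention and less memory pressure, so each request finishes sooner.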
I have dug up a text describing the deadlock detection.
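Whatever scheme that text describes, the textbook mechanism is worth stating: maintain a wait-for graph (transaction a waits for a lock held by b) and declare deadlock when the graph has a cycle. The sketch below is that generic approach, not necessarily the one Ovrimos used.

```c
#include <assert.h>
#include <string.h>

#define NTXN 8  /* illustrative fixed transaction slot count */

/* waits[a][b] != 0 means transaction a waits for a lock held by b. */
static int waits[NTXN][NTXN];

/* Depth-first search with three colors:
 * 0 = unseen, 1 = on the current path, 2 = fully explored. */
static int dfs(int t, int *state) {
    state[t] = 1;
    for (int u = 0; u < NTXN; u++) {
        if (!waits[t][u]) continue;
        if (state[u] == 1) return 1;              /* back edge: cycle */
        if (state[u] == 0 && dfs(u, state)) return 1;
    }
    state[t] = 2;
    return 0;
}

/* Returns 1 iff the wait-for graph contains a cycle (a deadlock). */
static int deadlocked(void) {
    int state[NTXN];
    memset(state, 0, sizeof state);
    for (int t = 0; t < NTXN; t++)
        if (state[t] == 0 && dfs(t, state)) return 1;
    return 0;
}
```

A real server would also pick a victim transaction on the cycle and abort it to break the deadlock; that choice (youngest, cheapest to roll back) is policy, not mechanism.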
I'm sorry to say that, even though Ovrimos is now abandonware and could be released to the public, business ethics prevented me from safekeeping any code. The only code that has survived is the code for the HTTP server and the Scheme interpreter, which lived on through TinyScheme.