sear.c scrapes search results from popular engines, caches them, and serves a simple HTML UI


sear.c is used as a lightweight replacement for SearX that proxies and caches search results from the Google web search engine. The main advantages over SearX are speed and simplicity.

instructions for debian and ubuntu systems

First, add my software distribution repository to your APT sources list, then update the package index and install:

apt update
apt install sear.c
service sear.c start


dependencies

  • a POSIX system
  • GNU C library (uses tdestroy(3) if compiled without SC_OLD_STORAGE)
  • GNU compiler collection (it's written in GNU C - it uses nested functions)
  • GNU Make
  • libxml2-dev (for the simple HTTP/1.0 client and the HTML parser)
  • libmicrohttpd-dev (for serving results - use a reverse proxy, such as nginx, for HTTPS)
  • xxd (for converting HTML pages into C arrays when compiling from source)
  • php-cli, used for a single line of the Makefile (and I complain about bloat)

compiling from source

make prepare
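As an aside, the xxd step mentioned in the dependency list above converts HTML pages into C arrays that get compiled into the binary. A minimal illustration of what that conversion does (the file name here is purely illustrative, not one of sear.c's actual pages):

```shell
# write a tiny example page, then dump it as a C array, the way the build
# embeds HTML into the binary (page.html is only an illustrative name)
printf '<html>' > page.html
xxd -i page.html
# emits: unsigned char page_html[] = { 0x3c, ... };
#        unsigned int page_html_len = 6;
```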


usage

  • run the daemon - it starts listening on HTTP port 7327 (remember the port by picturing the letters SEAR on a phone keypad (; ); it can be changed with the environment variable SC_PORT
  • optional: create a reverse proxy for HTTPS
  • navigate to http://localhost:7327 and do a couple of searches to see if everything works
  • the horseshoe button redirects directly to the first result without wasting time on the results page; use it if you feel lucky. (BP)
  • the painting button performs an image search. PRIVACY WARNING: images are loaded directly from their origin servers (not from google)
  • the program writes all logs to standard error (and to /logs.html if compiled with SC_LOGMEM)
  • setting the h parameter rewrites result links from HTTPS to HTTP
  • setting the l parameter to a number limits the number of displayed links to that number.
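Two of the items above can be sketched in shell: the phone-keypad mnemonic for the default port, and an example query URL using the h and l parameters. Note that the search-term parameter name ("q" below) and the value given to h are assumptions on my part - neither is documented here:

```shell
# the SEAR -> 7327 mnemonic: map each letter to its phone-keypad digit
# (2=ABC, 3=DEF, 4=GHI, 5=JKL, 6=MNO, 7=PQRS, 8=TUV, 9=WXYZ)
port=$(echo SEAR | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' '22233344455566677778889999')
echo "$port"    # prints 7327

# an example query URL: l=5 limits output to 5 links, h rewrites links to
# HTTP; "q" as the search-term parameter and "1" as the h value are guesses
echo "http://localhost:${port}/?q=test&l=5&h=1"
```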

prebuilt binaries

apart from the usual debian repository, prebuilt binaries are available for amd64, arm64, i386 and armel, as well as debian packages.

before downloading, check that the build passed, as indicated by the badge below:

[build status badge]


screenshots

[screenshots in chromium]

additional information

  • valgrind reports a memory leak, and the leak grows with every API search query; run make valgrind and you'll see it. I was unable to find the bug, and it bothers me. I wrote a small bug PoC (test/bug) but could not replicate the leak with it (cd tmp/bug; make; make valgrind; less valgrind-out.txt - the process exits with no leaks possible). Example valgrind output from sear.c with one request done is included in test/bug/example-valgrind.txt. Such a small memory leak is not a problem, since all data extracted from queries is stored indefinitely anyway, but it's still pretty dumb to leak memory.
  • memory allocations are not checked for failure. This needs to be fixed before -fanalyzer can be used.
  • __attribute__s such as nonnull are not set on the struct members of the query types or on functions such as htmlspecialchars; instead they do if (!arg) return NULL, which is poor coding style and also keeps -fanalyzer from being useful. This needs to be fixed to use -fanalyzer.