Self improving programs
= Trustless algorithmic speed improvements =

Unlike most aspects of software development, algorithmic speed improvements are something we can objectively measure: a developer either makes a set of routines faster or they don't, there is no in-between. Such improvements could be attached to conditional payments on a special general-purpose blockchain and then offered as bounties to anyone who can satisfy those conditions. The cool thing is that this could all happen automagically: you could even have tools (IDEs, compilers, etc.) that automatically issued algorithmic speed improvement bounties for programs after detecting high run times. Imagine a network of self-improving programs that paid for their own electricity by improving the run time of other software. Since humans can improve the process, and humans can design programs to do it for them, an autonomous network of self-improving programs is possible. But how on Earth would this happen in reality? '''Some weird science fiction shit? Not exactly.'''

The idea is that you would generate a list of test inputs and outputs that offered full branch coverage, and record the run time of the original algorithm against them. To claim a bounty, a person would have to create a new algorithm that satisfied every test case, with full branch coverage of the new algorithm and a lower median execution time. To prevent "solutions" that simply output every expected test result without implementing the algorithm, you would generate large amounts of random test data and store them in a merkle tree. If a submission can satisfy every test case with full branch coverage, within a reasonable algorithm size, then it must be implementing the original algorithm rather than replaying test outputs. The test data in effect becomes like a hash function to use against an algorithm, but it's still not perfect.
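The scheme above can be sketched in a few lines. This is a minimal illustration, not a protocol implementation: <code>baseline</code> and <code>candidate</code> are hypothetical stand-ins for the original and bounty-claiming algorithms, and the merkle tree simply commits to randomly generated test vectors so they cannot be altered after the bounty is posted.

```python
import hashlib
import random
import time

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then pairwise-combine layers up to a single root."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:  # duplicate the last node on odd-sized layers
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# Hypothetical original algorithm: a deliberately slow membership test.
def baseline(needle, haystack):
    return needle in list(haystack)

# Hypothetical bounty submission: same behaviour via a different approach.
def candidate(needle, haystack):
    return needle in set(haystack)

# 1. Generate random test vectors and commit to them in a merkle tree,
#    so the posted bounty is bound to exactly this test data.
random.seed(0)
tests = [(random.randrange(1000), tuple(random.sample(range(1000), 50)))
         for _ in range(200)]
root = merkle_root([repr(t).encode() for t in tests])

# 2. A claim is only valid if the candidate matches on every test case...
assert all(candidate(n, hs) == baseline(n, hs) for n, hs in tests)

# 3. ...and, in the full scheme, if its median run time is lower.
def median_runtime(fn):
    times = []
    for n, hs in tests:
        t0 = time.perf_counter()
        fn(n, hs)
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]
```

Timing comparison is left uncommitted here because micro-benchmarks at this scale are noisy; a real bounty would need many repetitions and a fixed reference machine to make the "lower median execution time" condition objective.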
A person could easily add new functionality that did something malicious on the target system, so you would need to confine the language of the new solution to rule out malicious additions. This should be easier if the solution is a pure function and it's implemented in a language that prevents memory corruption, like Rust. <span id="the-dispute-process"></span>
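One way to confine the language of a submission is to parse it and reject any construct outside a small whitelist. The sketch below does this for Python source using the standard <code>ast</code> module; the whitelist is an illustrative assumption (no imports, no attribute access, and no function calls at all), not a vetted sandbox, and a production system would need a far more careful policy.

```python
import ast

# AST node types a submitted pure function may use. Anything outside this
# set (Import, Call, Attribute, etc.) causes rejection, so submissions
# cannot reach the filesystem, the network, or other modules.
ALLOWED = {
    ast.Module, ast.FunctionDef, ast.arguments, ast.arg, ast.Return,
    ast.Assign, ast.AugAssign, ast.Expr, ast.If, ast.While, ast.For,
    ast.Name, ast.Load, ast.Store, ast.Constant,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Mod, ast.FloorDiv,
    ast.Compare, ast.Lt, ast.LtE, ast.Gt, ast.GtE, ast.Eq, ast.NotEq,
    ast.BoolOp, ast.And, ast.Or, ast.UnaryOp, ast.USub, ast.Not,
    ast.List, ast.Tuple, ast.Subscript, ast.Slice,
}

def is_confined(source: str) -> bool:
    """Return True iff every AST node in the submission is whitelisted."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return all(type(node) in ALLOWED for node in ast.walk(tree))

# A pure function over its arguments passes the check...
safe = (
    "def f(xs):\n"
    "    total = 0\n"
    "    for x in xs:\n"
    "        total = total + x\n"
    "    return total\n"
)

# ...while anything touching the host system is rejected.
unsafe = (
    "import os\n"
    "def f(xs):\n"
    "    os.system('do something malicious')\n"
    "    return 0\n"
)

assert is_confined(safe)
assert not is_confined(unsafe)
```

A syntactic whitelist like this only limits what the code can express; it says nothing about run time or memory, so a real bounty system would still need resource limits around execution.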