Exploit markets
= An example =

# A smart contract to audit a C-based program is written. It includes a test case that checks whether a file with a specific name has been created under the process's permissions, along with information about the program.
# A researcher finds a bug and uses it to write a buffer-overflow exploit. The exploit is designed to pass the test case and, for security reasons, is written in a special domain-specific language for exploitable code.
# The researcher goes through the protocol to claim the reward by committing to the exploit and a payment address.
# The network runs the exploit against the software inside a virtual machine and runs the test case to check for the file. If a valid exploit was found, the process should have been hijacked to write the file. The validity of an exploit thus forms part of the consensus rules for the exploit blockchain.

This process means that the complexity of the program under audit doesn't need to be accounted for: only the results of an exploit need checking. But one also needs to be careful with obfuscated exploits, as these would make it much harder for the vendor to release a patch.

One possible solution to the obfuscated-exploit issue is to create a special language specifically for describing exploitable code, which can then be used to express highly compact exploits. Such a language might not necessarily be Turing complete to start with, but it would be formulated in a way that makes writing obfuscated code very difficult. This exploit DSL would be closely tied to how the test cases work, so that many details of an exploit can be omitted to make the code more readable.

Also note that this process doesn't depend on trust, since the results of all computations are verifiable. Hence there is no need to introduce a trusted third party. It's also important to differentiate this from so-called "oracle" schemes, which depend on a number of trusted participants who vouch for external state or some subset of trusted operations.
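The validity check in step 4 can be pictured as follows. This is a minimal sketch in Python, assuming a hypothetical <code>run_in_vm</code> helper that executes the binary with the exploit input inside a sandbox and reports which files the process created; all names here are illustrative, not part of the protocol.

```python
def run_in_vm(binary: bytes, exploit_input: bytes) -> set[str]:
    """Hypothetical stand-in for running the program against the exploit
    inside an isolated VM and recording the files the process creates.
    Here we simply simulate a successful hijack for illustration."""
    return {"/tmp/proof_of_pwn"}

def exploit_is_valid(binary: bytes, exploit_input: bytes,
                     sentinel: str = "/tmp/proof_of_pwn") -> bool:
    # The contract's test case: a valid exploit must hijack the process
    # and make it create the agreed-upon file under its own permissions.
    created = run_in_vm(binary, exploit_input)
    return sentinel in created

# Every full node runs the same deterministic check, which is what lets
# exploit validity become part of the consensus rules.
assert exploit_is_valid(b"\x7fELF...", b"A" * 1024)
```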
This scheme can be done entirely on top of regular proof-of-work, as every full node can easily check a transaction's validity for themselves based on the program's response to exploits. So this whole thing is really just another part of the consensus (which is what makes this a smart contract.)

<span id="technical-details"></span>
== Technical details ==

'''1. contract_tx = Hash(binary_file) as H1, URL(binary_file) as URL1'''

* Set up the smart contract with the bug-bounty information. Valid bugs are expressed in terms of input to the program via user input, the network, and/or files, in accordance with templates or custom rules. E.g. bug validity is determined by whether or not a bug can cause privileged actions to happen within the context of the process space. This is testable.

'''2. exploit_tx = Hash(exploit_code + payout_pub_key) as H2'''

* Commit to a hash of an exploit and payout address that satisfies the bounty conditions against the binary_file.
* Commit to a collateral payment that is given to the miners if the exploit is invalid, to avoid attacks.

'''3. disclosure_tx = RSA_Encrypt(exploit_code, vendor_pub_key) as E'''

* Publicly release an encrypted version of the exploit using the vendor's public key. This is done with no random padding, such that the same plaintext always produces the same ciphertext (YOLO.)

'''4. Vendor receives encrypted exploit'''

* The vendor decrypts the exploit code from the blockchain with their RSA private key and validates it against their software.

'''5. The vendor (actually) pushes a patch to their customers on time.'''

Because if they don't, the smart contract punishes them. Can you hear the trolls singing?

'''6. confirm_tx (optional)'''

The vendor signals to the researcher that they may now disclose the exploit.

'''7. release_tx (the researcher actually gets paid for once)'''

The researcher releases the exploit code and claims the reward.
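Steps 2 and 7 form a commit–reveal pair. The sketch below assumes SHA-256 as the commitment hash (the text doesn't specify one) and uses illustrative names throughout:

```python
import hashlib

def exploit_tx(exploit_code: bytes, payout_pub_key: bytes) -> bytes:
    # Step 2: publish only H2, binding the exploit to the payout key
    # without revealing either one.
    return hashlib.sha256(exploit_code + payout_pub_key).digest()

def validate_release(exploit_code: bytes, payout_pub_key: bytes,
                     h2: bytes) -> bool:
    # Steps 7-8: the revealed values must hash to the earlier commitment,
    # so a thief who copies H2 from the chain can't redirect the payout.
    return hashlib.sha256(exploit_code + payout_pub_key).digest() == h2

h2 = exploit_tx(b"exploit bytes", b"researcher pub key")
assert validate_release(b"exploit bytes", b"researcher pub key", h2)
assert not validate_release(b"exploit bytes", b"attacker pub key", h2)
```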
Input = exploit_code + payout address, such that H(exploit_code + payout address) == H2. Output = anything. Sig = must be signed with the ECDSA key pair used for the payout address.

'''8. Validate the release TX'''

* The network validates the exploit against the binary file in accordance with the rules listed in the contract. Exploits are executed in virtual machines, and their validity forms the consensus rules for allocating rewards.
* The network checks whether or not the vulnerability was disclosed to the vendor by testing RSA_Encrypt(exploit_code, vendor_pub_key) == E. If it was, it looks at how many blocks elapsed between disclosure and release to calculate to what extent a penalty or reward is justified.
* If an exploit was disclosed too early, the smart contract can specify a penalty for the researcher (like a reduced reward, or even forfeiting researcher collateral if collateral is used to submit "solutions.") One issue with this idea is that if another person finds the vulnerability and discloses it early, it is impossible to prove whether the original researcher was the one who did so, which opens up an attack vector. On reflection, it really only makes sense to have early-disclosure penalties for highly paid flaws / serious bugs.
* Optional: the reward is initially given as a fraction based on the number of outstanding exploits against the software version, reward_size / N, but as exploits are released you get to see who was first to claim a bounty, so that the rest of the reserved fractions become available to the same researcher. This allows rewards to be split for duplicate bugs.

'''9. patch_tx = Hash(new_binary_file) as H3, URL(new_binary_file) as URL2, code_changes as diff'''

* The vendor uses an ECDSA private key to sign a new transaction that points their software to an updated version. This updates the contract metadata and proves to the network that the exploit has been fixed.

'''10. Validate the patch TX.'''

* Patches can be expressed as small programs that alter the code of the original software. The network can then use these changes to rebuild the binary files with a deterministic build system, run the test cases, and run the original exploit against the result to see if it succeeds or fails.
* The network also looks at how long it took the vendor to initiate a patch TX. Rewards and penalties are then assigned based on the rules originally set forth in the contract_tx.

'''Goto: step 1 again.'''

'''1. Continue running exploits and allocating rewards.'''

* Because exploits are written against a particular version, it's possible to track at what point in the chain a patch defeated a particular set of exploits. This makes it possible to see which researchers submitted duplicate bugs and to give them a split of the reward (if specified.)
* To avoid attacks, all exploits must be disclosed within a reasonable number of blocks regardless of a patch TX. This ensures that rewards are given out on time, and that rewards allocated as a potential split between researchers are progressively given out to the first person in the chain to have demonstrated the bug. Perhaps penalties for false exploits can be given to the first researcher whose time was wasted.
* A DAO between researchers and vendors can be established to verify patches if this patch system doesn't work.

Most of the complexity in this scheme exists to stop people from stealing exploits and claiming they wrote them: without commitments, someone could copy the exploit hash directly from your transaction and try to confirm it before you. I suppose this isn't that much different from how blockchain notaries work, if that helps.

<span id="scalability-and-incentives"></span>
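The disclosure check in step 8 only works because the unpadded encryption in step 3 is deterministic: anyone can recompute the ciphertext from the revealed exploit and compare it to E. A toy sketch with textbook (unpadded) RSA on deliberately tiny numbers; the key values are illustrative only, and unpadded RSA is insecure in practice, which is the trade-off the text alludes to.

```python
# Toy textbook RSA, no padding. Determinism is the point: the network
# can recompute RSA_Encrypt(exploit_code, vendor_pub_key) and compare
# it to the E published in the disclosure_tx.
N, E_PUB = 3233, 17      # vendor public key: n = 61 * 53, e = 17
D_PRIV = 2753            # vendor private key: 17 * 2753 % 3120 == 1

def rsa_encrypt(message: bytes) -> list[int]:
    # Encrypt per byte (each byte < n); no random padding, so the same
    # plaintext always yields the same ciphertext.
    return [pow(b, E_PUB, N) for b in message]

def rsa_decrypt(cipher: list[int]) -> bytes:
    return bytes(pow(c, D_PRIV, N) for c in cipher)

exploit_code = b"overflow"
E = rsa_encrypt(exploit_code)            # published in the disclosure_tx

# Vendor side (step 4): recover the exploit from the blockchain.
assert rsa_decrypt(E) == exploit_code

# Network side (step 8): verify that disclosure actually happened.
assert rsa_encrypt(exploit_code) == E
```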