= 12. Advanced GSM secret contracts =

Anyone who has used a VOIP app before knows how frustrating a bad connection can be. Voice-over-IP typically runs over UDP, which results in choppy audio and missed words when packets go missing. Add latency on top of that and it gets worse: people start talking over each other. VOIP can really suck sometimes.

Despite these issues, we have had some success. We know that if the connection latency is 100 ms or less, human beings perceive it as instantaneous, and anything under a generous 1 second of latency allows for communication without hindrance [ux-response] (the full picture is a little more complicated than that, since it also takes into account jitter, a measure of the average variation in time between sending and receiving packets, but this is close enough.)

With this information in mind there is one terrible limitation to using secret contracts: '''latency'''. Currently Enigma uses trusted computing for its secret contracts (and, in the future, multi-party computation) to divide computations between nodes in a network. If we're to use secret contracts to encapsulate integrity proofs, then every communication delay between the Enigma nodes exchanging those proofs is added on top of a voice call. Ultimately, these delays will be so high that it will be impossible to use the system for calling, not to mention that it will require a data channel and we may only have voice minutes available. Clearly this approach needs some work!

So to bring everything together: I will design the final '''service sharing contract.''' The new contract will allow voice calls to occur normally, offer greater control over what buyers can access, provide a basic way to enforce quality of service, and allow resources to be split up among multiple buyers (the original contract was limited to only one buyer.) The new contract will rely on micro-payment channels, secret contracts, trusted computing, VLR fuzzing, disincentives, and insurance.

<span id="low-latency-service-sharing-contract"></span>
== 12.1. Low-latency service sharing contract ==

* In this secret contract a '''seller''' wants to sell access to mobile credit on their plan to an unknown '''buyer.'''
* The '''buyer''' doesn't trust the '''seller''' to provide this service faithfully and assumes they will attempt to degrade service wherever possible.
* Likewise, the '''seller''' doesn't trust the '''buyer''' to pay for it.

'''The following contract can be used''' (a rough code sketch of this flow follows the list):

# '''Seller''' pays a small amount into a contract to register an account. They will use this as a Sybil-proof identity in future agreements.
# '''Seller''' and '''Buyer''' agree on the terms of the exchange (credit, price, and expiry time.)
# A new secret contract (SC) is created that houses the agreement code.
# '''Seller''' inputs credit amount, SIM key, IMEI, IMSI, and T-IMSI -> SC.
# '''Buyer''' calls SC.deposit(credit * price). The contract is now pending '''buyer''' acceptance and has a timeout (T.)
# '''Buyer''' uses SC.get_auth_response(rand) to log in to GSM. Carrier-specific codes are now sent out to check the balance remaining on the account and expiry dates. These messages must be generated through SC as the '''Buyer''' doesn't yet have the integrity key.
#* IF the balance == '''Seller''' credit AND expiry == '''Seller''' expiry, then the '''Buyer''' calls SC.accept() and the contract proceeds to step 7.
#* ELSE, '''Buyer''' calls SC.decline(gsm_received_credit_details), which checks the integrity of the input (if it can) using the internal SIM secret to compute an integrity key.
#** IF the input is a valid message then the '''Seller's''' account is '''BANNED''' from the network for wasting the '''Buyer's''' time.
#*** The '''Buyer's''' deposit bond is fully released.
#* IF T elapses without an accept() or decline() from the '''Buyer''', the '''Buyer''' receives a small penalty from their deposit bond for wasting the '''Seller's''' time and the rest of their bond is released.
# When T elapses AND SC has been accepted, the integrity key is released to a trusted processor on the '''Buyer's''' mobile. Running within this processor is software that restricts the '''Buyer''', controlling:
#* [[File:P2p mobile carriers 6.png|thumb]] Location updates
#* Browser and network activity
#* Service access on GSM
#* Billing
#* Etc.
# If the '''Buyer''' wants to update their location they must do the following to avoid race conditions with the '''Seller''':
#* Indicate an intention to update the VLR in SC. The SC.state changes to pending and a timer, LT, starts. SC releases an integrity-stamped update message to the '''Buyer'''. During this time the secure processor doesn't allow other messages to be signed.
#* Issue a VLR update on the GSM network and obtain the ack values.
#* Wait for the timer to elapse.
#* Send a GSM status message to check if still connected:
#** The processor returns signed_remaining_credit -> RC.
#** If not connected, OR LT has timed out, the processor restricts the '''Buyer's''' access and the '''Buyer''' calls SC.finish(RC). SC won't allow re-auth from the '''Buyer''' in this state, either. The channel has now closed.
#*** The SC.state changes to finished.
#** ELSE call SC.continue(VLR proof value, RC).
#*** Return failure if the VLR proof value has a bad integrity value and isn't a valid response using the old T-IMSI.
#*** Change SC.state to “locked-in.”
# When the '''Buyer''' is done using the service, they may call SC.finish(RC) to close the channel. SC.state changes to finished.
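To make the flow above easier to follow, here is a rough sketch of the contract's state machine in Python. The method names mirror the calls used above (deposit, accept, decline, finish); the class name, the clock handling, and the is_valid_gsm_message helper are placeholders I've made up for illustration, and the trusted-computing and GSM details are glossed over entirely.

<pre>
# Minimal sketch of the 12.1 contract flow (steps 1-9). Not a real secret
# contract: state names and penalty maths are illustrative only.
import time

class ServiceSharingContract:
    def __init__(self, seller_credit, price, timeout_secs, penalty_rate=0.01):
        self.credit = seller_credit                 # step 4: seller's advertised credit
        self.price = price
        self.deadline = time.time() + timeout_secs  # acceptance timeout T (step 5)
        self.penalty_rate = penalty_rate
        self.deposit_amount = 0.0
        self.state = "created"

    def deposit(self, amount):                      # step 5: buyer escrows credit * price
        assert self.state == "created" and amount >= self.credit * self.price
        self.deposit_amount = amount
        self.state = "pending"

    def accept(self):                               # step 6: balance and expiry matched
        assert self.state == "pending" and time.time() < self.deadline
        self.state = "accepted"                     # integrity key released at T (step 7)

    def decline(self, gsm_credit_msg, is_valid_gsm_message):
        # step 6 (ELSE branch): buyer shows the seller misrepresented the plan
        assert self.state == "pending"
        if is_valid_gsm_message(gsm_credit_msg):
            self.state = "seller_banned"            # seller penalised
        return self.deposit_amount                  # buyer's bond fully released

    def expire(self):                               # T elapsed with no accept()/decline()
        if self.state == "pending" and time.time() >= self.deadline:
            self.state = "expired"
            return self.deposit_amount * (1 - self.penalty_rate)  # small buyer penalty

    def finish(self, signed_remaining_credit):      # step 9: close the channel
        assert self.state in ("accepted", "locked_in")
        self.state = "finished"
        used = self.credit - signed_remaining_credit
        return used * self.price                    # amount owed to the seller
</pre>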
<span id="detecting-contract-breach-by-a-seller-algorithm"></span>
== 12.2. Detecting contract breach by a seller (algorithm) ==

In the event of a network failure the buyer needs to know if the seller is to blame for it. In fact, there should be a mechanism that '''acts as a deterrent against the seller using their own service.''' What I propose to achieve this is a novel method that allows the state of temporary identity allocations in a VLR to be fuzzed. Before I introduce this protocol it is necessary to understand exactly how temporary identities are allocated by the VLR, including every edge case.

<span id="from-here-on-i-will-use-tid-to-refer-to-these-values"></span>
=== From here on I will use ‘TID’ to refer to these values: ===

* 2G TID = [t-imsi].
* 3G TID = [t-imsi] for voice, and [p-tmsi] for data.
* 4G TID = [guti].
* 5G TID = [5g-guti].

<span id="behaviour-of-tid-allocation"></span>
=== Behaviour of TID allocation: ===

* '''Option A)''' The mobile device authenticates with the MsC and the VLR returns a new TID. To complete this protocol the MS acknowledges the TID. The state changes to '''{ NEW: { T: TID, I: IMSI } }''' in the VLR.
* '''Option B)''' The same as Option A, but the MS doesn't acknowledge the new TID. The state changes to '''{ OLD: { T: self.NEW.T, I: self.NEW.I }, NEW: { T: TID, I: IMSI } }'''.

'''Authentication accepts both TIDs.'''
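As a quick illustration, here is a toy model of the two allocation behaviours above. The ToyVLR class and its field names are invented for this sketch; real VLR implementations vary between operators (see Premise 4 below).

<pre>
# Toy model of TID allocation (Options A and B). The VLR record is just the
# { OLD, NEW } mapping described above; everything else is simplified.
import secrets

class ToyVLR:
    def __init__(self):
        self.state = {"OLD": None, "NEW": None}

    def authenticate(self, imsi, acknowledge_tid):
        new_tid = secrets.token_hex(4)          # VLR hands out a fresh TID
        if acknowledge_tid:
            # Option A: MS acknowledges, so only NEW is kept.
            self.state = {"OLD": None, "NEW": {"T": new_tid, "I": imsi}}
        else:
            # Option B: no acknowledgement, so the previous NEW survives as OLD.
            self.state = {"OLD": self.state["NEW"],
                          "NEW": {"T": new_tid, "I": imsi}}
        return new_tid

    def accepts(self, tid):
        # Authentication accepts both the OLD and the NEW TID.
        return any(rec and rec["T"] == tid for rec in self.state.values())
</pre>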
<span id="key-aspects-of-session-management"></span>
=== Key aspects of session management: ===

* '''Premise 1:''' A subscriber can only exist in one VLR at the same time. If a subscriber roams to a new VLR their old records are deleted [premise-1].
* '''Premise 2:''' If a subscriber has an active session with an MsC and another entity tries to authenticate as the same subscriber, the first session will be terminated, resulting in errors [sim-cloning].
* '''Premise 3:''' A location update without successful authentication DOES NOT result in a change to TID state. Authentication is required [location-updates][tid-allocation-4g][tid-allocation].
* '''Premise 4:''' TIDs must be allocated after every new location update [tid-allocation], but in practice not all networks properly follow this [tmsi-implementation]. TIDs may also be deleted or change randomly throughout a session.
* '''Premise 5:''' VLR information may become corrupt or expire if too old.
* '''Premise 6:''' Different networks use different names for temporary identities. Refer to the list above for each network.

<span id="putting-it-all-together"></span>
=== Putting it all together: ===

<ol style="list-style-type: decimal;">
<li><p>'''Auditor''' attempts to issue a location update with the latest buyer TID (BT).</p>
<ul>
<li>If the MsC sends back an identity request the '''Seller''' must have logged in from a new location or acknowledged a new TID. '''Penalise Seller.'''</li></ul>
<pre>{ NEW != BT }</pre>
<ul>
<li>If it asks for auth, BT is still a valid TID.</li></ul>
</li>
<li><p>'''Auditor''' authenticates with the '''Seller's''' IMSI and retrieves a new TID (T1) which they don't acknowledge.</p>
<pre>{ NEW = X
  OLD = Y }</pre></li>
<li><p>'''Auditor''' attempts to issue a location update with BT as the TID.</p>
<ul>
<li>If the MsC sends back an identity request we can conclude Y == BT, which means the '''Seller''' attempted to use an incomplete location update to evade detection. '''Penalise Seller.'''</li></ul>
<pre>X == ?  (seller overwritten value)
Y == BT</pre>
<ul>
<li>If we're prompted for authentication it must mean that X == BT, hence no changes have taken place since the '''Auditor''' last checked. The '''Buyer''' now changes BT in the secret contract to T1. T1 is the most recent value.</li></ul>
<pre>X == BT (buyer latest value)
Y == T1</pre></li></ol>

Before the buyer accepts the secret contract for the first time they authenticate with the MsC using the TID provided by the seller. Should the seller's information be correct, the outcome of this process will be a new TID which the buyer saves to the contract. The contract, and the buyer's trusted processor, control when the buyer can issue a location update. The rules state that the buyer has a set amount of time to issue an update, and to proceed with the contract they must prove that the MsC accepted an update by providing a location update accepted message signed by a valid integrity key. Such a message may include a new TID, from which we can infer what the TID allocation state in the buyer's local VLR should be. If the buyer is unable to provide such a proof, they must end the contract by committing their current credit usage. '''We cannot deduce here if a failure was the result of a malicious buyer, seller, or some kind of network failure, due to the presence of race conditions.''' Multiple parties can issue location updates here at the same time, so penalty breaches shouldn't be assigned.

In order to ensure that the buyer's update has gone through, they simply issue an update, wait for a response, and decide on an outcome after a countdown, X, reaches zero. If anything goes wrong before X expires, the buyer may gracefully end the contract, paying only for the credit they've used. No damages can be awarded to the buyer or seller in the event of a failed update due to the presence of race conditions. Thus, a seller is only able to disrupt service by anticipating the start of the X countdown, and if they fail to guess this correctly, the buyer will have proof of a breach and can penalise them via the fuzzing protocol.
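The three auditor probes above can be expressed as a short routine. This is only a sketch: the msc and contract objects, their method names, and the two response labels ("identity_request" / "auth_request") are assumptions standing in for the real signalling described above.

<pre>
# Sketch of the auditor's fuzz probe from "Putting it all together".
# msc.location_update(tid) is assumed to return either "identity_request"
# or "auth_request"; msc.authenticate() performs step 2 without acknowledging
# the new TID; contract wraps the secret contract's penalty/bookkeeping calls.

def audit_seller(msc, buyer_tid, seller_imsi, contract):
    # Probe 1: location update with the buyer's latest TID (BT).
    if msc.location_update(buyer_tid) == "identity_request":
        # Neither NEW nor OLD is BT: the seller logged in from a new location
        # or acknowledged a new TID.
        return contract.penalise_seller("BT is no longer allocated")

    # Probe 2: authenticate as the subscriber and do NOT acknowledge the new
    # TID, so the VLR keeps { NEW = T1, OLD = previous NEW }.
    t1 = msc.authenticate(seller_imsi, acknowledge_tid=False)

    # Probe 3: another location update with BT.
    if msc.location_update(buyer_tid) == "identity_request":
        # BT had already been pushed into the OLD slot by an unacknowledged
        # seller authentication, so probe 2 bumped it off entirely.
        return contract.penalise_seller("BT displaced by an unacknowledged auth")

    # Otherwise BT was still in NEW before probe 2: no interference.
    # Record T1 as the buyer's most recent TID in the secret contract.
    contract.set_buyer_tid(t1)
    return "no_breach"
</pre>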
<span id="detecting-contract-breach-by-a-seller-fuzzing"></span>
== 12.3. Detecting contract breach by a seller (fuzzing) ==

Once the new TID has been locked in, we can infer interference by carefully examining the state of the buyer's local VLR. First, we know that manual changes to the TIDs can only occur via a successful authentication, and we're able to control when that occurs for the buyer. Thus, we already know what state they should be in. Next, we know that two sessions for the same subscriber aren't possible. So if a buyer is suddenly disconnected it may be the result of a malicious seller or a network failure. In this case, an agent (the buyer or an untrusted third party) can immediately start the fuzzing protocol.

An agent running the fuzzing protocol starts by checking if there is still a record of the buyer's latest TID in (any) VLR. If there isn't, their local VLR can no longer determine what IMSI it belongs to and thus will issue an identity request back to the agent. If that occurs we can infer the seller has interfered, because only the seller has the keys needed to authenticate outside the secret contract and without knowing the latest TID.

<blockquote>(Note to self: It may be that the fuzzing protocol should only be run against the same VLR that last stored the latest TID. I need to confirm this.)
</blockquote>

To differentiate this from a TID expiry, the buyer is required to issue periodic location updates. Since all incoming GSM messages to the buyer are encrypted and must pass through their secure processor for deciphering, the processor is able to track any changes that might occur to the TID, along with any sessions that might have ended uncleanly. To prevent a buyer from hiding TID changes and using them to falsely accuse a seller of interference, the seller blame process also requires a signed message from the buyer's trusted processor attesting that their current TID is still valid. This handles any weird edge cases that might occur during network failures.

Assuming that there was no previous identification request, the VLR will acknowledge the current TID by issuing an authentication request. The question now becomes: how do we determine the exact state in the VLR? We already know that if a TID reallocation isn't acknowledged the VLR keeps a mapping of the old value = IMSI and the new value = IMSI.

<blockquote>New = ?
Old = ?
</blockquote>

Thus, if the seller has not interfered there should be no value attached to “old”, and new should point to the latest TID of the buyer. The exact state mapping can be deduced by authenticating without acknowledging the new TID reallocation, followed by a new session with a location update using the buyer's latest TID.
<blockquote>Old = null
New = Buyer TID
</blockquote>

If the old TID was already equivalent to the buyer's latest TID prior to authentication, then generating a new TID and not acknowledging it will result in the VLR setting the old TID to the value stored under the current new TID, and then setting the current new TID to a random TID. Thus, the buyer's latest TID will get “bumped” off the VLR if a seller had already tried this, and we're able to detect this by checking the result of a subsequent location update attempt (does it still acknowledge the TID or not.)

'''Before:'''
<blockquote>Old = Buyer TID
New = ? (seller compromised)
</blockquote>

'''After:'''
<blockquote>Old = ? (seller compromised)
New = Latest TID
</blockquote>

Fuzzing in this way is very efficient because step 1 only has to run if the buyer has been disconnected, and it determines whether a seller has authenticated in another location area in the same step. The following steps check if a seller is authenticating in the same location area as the buyer. Potential TID changes need to be tracked throughout a session by the buyer's trusted processor, but as long as proof of these packets is fairly reliable, the fuzzing protocol doesn't require much trust. An additional node could always be appointed to record incoming GSM packets for audits. The full protocol can provide proof-of-interference via integrity-stamped messages, and can be run by a third party using secret contracts.

<blockquote>Todo: There may be a better way to detect TID changes for a buyer.
</blockquote>

What's interesting about this protocol is that it appears to be resistant to race conditions, in that any attempt by a seller to disrupt fuzzing only results in the protocol returning faster (since the buyer's TID will be bumped off.) This is a useful property to have because the VLR contains logic that ignores subsequent location updates, which could otherwise be exploited. Another useful property of this protocol is that any agent (other than the seller) can run it with minimal trust and use the messages returned to prove the outcome (they contain proof-of-integrity.) Should the seller believe that these messages are in error (perhaps due to operator interference or a broken trusted processor) they may defer to an auditor to run the protocol.

<span id="detecting-contract-breach-by-a-seller-sanity-test"></span>
== 12.4. Detecting contract breach by a seller (sanity test) ==

It's possible that a VLR implementation will not be compatible with the fuzzing protocol. For example, it might allow more than two TIDs to be valid for a single IMSI. To detect this case, a dry run of the fuzzing protocol should be done prior to accepting the secret contract. The buyer can then decide whether to proceed without penalty breaches for a seller. It should be noted that regardless of what occurs, both sides are always free to close out their channels and pay what they owe. Should a seller's service become unusable or end prematurely, the buyer can always close out their channel and contract with a new seller.

<span id="detecting-contract-breach-by-a-buyer"></span>
== 12.5. Detecting contract breach by a buyer ==

In the new version of the sharing contract, integrity and ciphering keys have been moved from the secret contract into a secure section of the buyer's mobile device. On the latest phones from Samsung there is a feature called “Samsung Knox” for running secret code on an untrusted host [knox]. It's unknown how secure this is, so what I propose is that a DAO be used for insurance or bounties.

In the event that a buyer consumes more credit than expected, the seller's contract will be terminated and any outstanding balance, minus the buyer's escrow, can be paid out via the insurance DAO. A fee can be paid from a sharing contract into the DAO to be eligible for insurance. After the buyer accepts the contract, and the integrity and cipher keys have been transferred into their trusted processor, they may manage to extract them and bypass credit limits. Fortunately, even if a percentage of users manage to extract keys and exploit the system, contracts remain viable as long as the DAO can cover the losses.

It should be noted that in order to claim insurance a DAO-appointed auditor would need to have checked the initial credit balance for a seller. Otherwise a seller could contract with themselves and claim they just lost millions in credit. This would only need to be done if the contract is insured and the seller is claiming to possess new resources. After that, they can provision their resources any way they like without needing a new audit. There may be a way around this requirement.
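A rough sketch of the insurance arithmetic described above, assuming the DAO simply pools per-contract fees and pays out the shortfall when a buyer overruns their limit. The class, the fee rate, and the solvency rule are all illustrative, not a worked-out mechanism design.

<pre>
# Toy insurance DAO: pools fees from sharing contracts and pays sellers the
# overrun not already covered by the buyer's escrow, while it stays solvent.

class InsuranceDAO:
    def __init__(self):
        self.pool = 0.0

    def insure(self, contract_value, fee_rate=0.02):
        fee = contract_value * fee_rate      # fee paid from the sharing contract
        self.pool += fee
        return fee

    def claim(self, credit_overrun, price, buyer_escrow):
        # Outstanding balance minus the buyer's escrow, capped by the pool.
        loss = max(0.0, credit_overrun * price - buyer_escrow)
        payout = min(loss, self.pool)        # the DAO can only cover what it holds
        self.pool -= payout
        return payout
</pre>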
<span id="detecting-contract-breach-by-a-buyer-continued"></span>
== 12.6. Detecting contract breach by a buyer (continued) ==

While trusted computing and insurance offer good safeguards against abuse, it's still recommended that sellers take the time to lock down their plans by disabling any obvious avenues for abuse (e.g. premium SMS / calls, group calls / roaming, international calls, etc.)

The risk of abuse is lower when a plan's resources are being sold to a single buyer, because by convention the buyer must be able to fully pay for the resources, so their escrow will always have enough to cover the cost of service. It's only when resources start to be divided between different buyers that you run into problems.

Consider a seller provisioning resources to multiple buyers. Each buyer is only going to pay for the resources they're interested in. So a malicious buyer who is able to bypass a secure processor can consume resources reserved for other buyers. And who should pay the cost if insurance wasn't included in the contract? The other buyers haven't done anything wrong.

One safeguard to put in place might be to have GSM I/O go through a randomly selected node that acts as a packet notary. These notaries will only relay ciphered GSM packets if the first N bytes don't match a certain pattern. They will also record the metadata for ciphered responses from the network, which can be audited to check for TID reallocation. Notaries don't need to see a full conversation, only a small portion of each packet, and they would also hide a buyer's location from the VLR as a cool bonus.
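Here is a sketch of what such a notary might do, under the assumptions above. The blocked prefixes, the prefix length, and the metadata fields kept for audit are invented; the point is only that the notary inspects the first N bytes of each ciphered frame and records metadata, never the payload.

<pre>
# Toy packet notary: relays uplink frames unless their prefix matches a
# disallowed pattern, and logs downlink metadata for later TID audits.
import time

BLOCKED_PREFIXES = [b"\x0b\x7b"]      # hypothetical "disallowed service" patterns

class PacketNotary:
    def __init__(self, relay, prefix_len=4):
        self.relay = relay            # callback that forwards a frame to the MsC
        self.prefix_len = prefix_len
        self.audit_log = []           # metadata for TID-reallocation audits

    def handle_uplink(self, frame: bytes) -> bool:
        prefix = frame[:self.prefix_len]
        if any(prefix.startswith(p) for p in BLOCKED_PREFIXES):
            return False              # refuse to relay disallowed traffic
        self.relay(frame)
        return True

    def handle_downlink(self, frame: bytes) -> bytes:
        # Keep only metadata about ciphered responses, never the contents.
        self.audit_log.append({"ts": time.time(),
                               "size": len(frame),
                               "prefix": frame[:self.prefix_len].hex()})
        return frame
</pre>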
'''The following sections list a few contract examples.'''

<span id="a-contract-for-faster-internet-speed"></span>
== 12.7. A contract for faster Internet speed ==

Within a mobile network the maximum Internet speed that a customer can achieve is capped to prevent interference with other customers. The only way to increase speeds is to buy a better data plan (if any are available) or purchase additional plans. With a second plan the theoretical maximum resources available for downloads and uploads is doubled, but within the context of the web, most web servers (and most browsers, for that matter) are only built to stream a given resource down a single connection. Consequently, two mobile plans might allow a page with lots of elements to load faster, but they won't increase the speed of a single TCP stream.

<blockquote>There is an exception though, and most people will recognise it: [torrents].
</blockquote>

Since torrents split files into chunks, each chunk can be streamed down a different TCP connection over both mobile plans. So in this scenario it's very easy to utilise all the resources. But are most people willing to pay twice as much for twice the speed in such a niche scenario? Probably not.

Fortunately, the service sharing contract offers the ideal primitive to build something for this niche. Consider a scenario where a person has unneeded data left on their plan and they would really like a faster connection right now. In this case, they can form a contract with a group of sellers to buy immediate access to their service plans, in exchange for those sellers having access to the buyer's plan at some point in the future.

[[File:P2p mobile carriers 7.png|thumb]]

The service sharing contract allows bandwidth and other resources to be aggressively leveraged for a faster connection. It's quite interesting to note that the seller can define precise limits on speed in the buyer's trusted processor, meaning that plans become more fungible and can be created on demand to suit the needs of a buyer.

Note: multiple phones and good networking knowledge would be required to utilise this contract, but I can imagine an app that would make this easier.

<span id="virtual-micro-carriers"></span>
== 12.8. Virtual micro-carriers ==

There are many unique features that differ between mobile plans. For instance, some plans may cater to businesses more than others by offering cheap fax service, while other plans may be more suited to teenagers. The service sharing contract turns every potential plan into its own virtual micro-carrier. These micro-carriers are free to design entirely new mobile or Internet experiences through the use of trusted code. Many new services can be combined into a single package, complete with its own access rules, and sold on an open market.

[[File:P2p mobile carriers 8.png|thumb]]

One exciting consequence of micro-carriers is the ability to create backwards-compatible improvements to the mobile system. These improvements might include better payment experiences, or even novel voice services. The service sharing contract turns any such improvement into a liquid commodity that can be traded or used by machines.

<span id="self-routing-programs"></span>
== 12.9. Self-routing programs ==

One very strange program that can be built with micro-carriers is what you might call a “self-routing program.” A self-routing program maintains a token balance inside a contract and uses it to buy data service from a decentralized marketplace. The program would ensure that it always has a way to access the mobile Internet, and buys enough access to trusted mobile processors to be able to keep itself running (these devices may not have any credit outside of what the program brings.)

By decoupling the mobile plan from the host machine, a program is able to control the level of connectivity it has to the Internet. It may not seem like it at first inspection, but this is very different to renting servers. A server is a fixed target and its Internet access cannot be transferred to another host if it goes down, hence p2p network services lack that level of control. A self-routing program can use the same plans on different devices, and move between them as they go offline.

In the future, a self-routing program may turn out to be a better way to create “unstoppable services” since Internet access is bought directly rather than relying on other people to maintain routing infrastructure.
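A very rough sketch of a self-routing program's main loop, under the assumption that a marketplace of service sharing contracts and a set of candidate trusted hosts can be queried; all of the names here (wallet, marketplace, hosts) are imagined interfaces.

<pre>
# Toy main loop for a self-routing program: spend the program's own token
# balance on connectivity and migrate between hosts as they go offline.
import time

def self_routing_loop(wallet, marketplace, hosts, interval=60):
    plan = None
    host = None
    while wallet.balance() > 0:
        if host is None or not host.online():
            # Migrate to any host that is still reachable.
            host = next((h for h in hosts if h.online()), None)
        if plan is None or plan.credit_remaining() <= 0:
            # Buy a fresh service sharing contract from the open market.
            plan = marketplace.buy_cheapest_data_plan(wallet)
        if host is not None and plan is not None:
            host.attach_plan(plan)    # the same plan can follow the program
        time.sleep(interval)
</pre>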
<span id="credit-that-never-expires"></span>
== 12.10. Credit that never expires ==

Selling temporary access to a service plan can be precisely controlled to specify properties like speed, data usage, and so on. This level of fine-grained control allows service plans to become liquid commodities, and once that happens, they will be able to be tokenized and traded freely. Implementing credit that never expires then becomes as easy as selling remaining resources for stable tokens and buying them back when they're needed. No more expiry: you actually get what you pay for.

What other contracts are possible? Could a contract for lower latency be created? That would be useful for gaming.

<span id="conclusion"></span>