Suppose we just want to either:

1. Have a crack at a keyspace, without caring that much whether we actually find the key; just speed the search up a bit.
2. Get serious. We know that brute-forcing keys one by one isn't going to work, so we need conditions such as skipping n out of every m keys: gather keys in huge batches and throw n/m of them away. E.g., gather 1 trillion keys and use n/m = 50000/100000 to exclude 50,000 out of every 100,000, ideally according to some rule (say, every 2nd key) rather than just the first 50,000.
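For what it's worth, the n-of-m idea could be expressed as a cheap per-block predicate over key offsets. A minimal Python sketch of what I mean (the function name and the "every 2nd key" rule are illustrations, not an existing option in this tool):

```python
def filtered_keys(start, count, m=100000, keep_every=2):
    """Walk the keyspace from `start`, yielding `count` keys but keeping
    only every `keep_every`-th key inside each block of `m` keys.
    With m=100000 and keep_every=2, half of every block is skipped,
    i.e. n/m = 50000/100000."""
    emitted = 0
    k = start
    while emitted < count:
        if ((k - start) % m) % keep_every == 0:
            yield k
            emitted += 1
        k += 1

# Toy run: blocks of 4, keep every 2nd key -> 0, 2, 4, 6, 8
print(list(filtered_keys(0, 5, m=4, keep_every=2)))
```

The rule is just a function of the offset within the block, so any skip pattern (not only "every 2nd key") could be plugged in without changing the walk itself.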
Another nice feature/enhancement would be generating keys only according to rules where we know what the private key will look like. For example, if I generated a gazillion private keys and they all started with 1111..., then I could use -r 1111 to tell the algorithm to compute and try only the keys in the search space that start with the 1111 bit pattern. Or specify hex, e.g. -r 0x1, to toss away keys that do not start with that pattern.
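A prefix rule like this wouldn't even need to generate and discard: fixing the leading bits shrinks the search space to the remaining low bits, so a generator can enumerate only matching keys. A hedged Python sketch (the 16-bit key size is a toy stand-in for real 256-bit private keys, and keys_with_prefix is a hypothetical helper, not part of this project):

```python
KEY_BITS = 16  # toy size; real private keys would be 256-bit

def keys_with_prefix(prefix_bits: str, key_bits: int = KEY_BITS):
    """Yield every key whose top bits equal `prefix_bits`, by fixing
    the prefix and counting through only the free low bits."""
    p = len(prefix_bits)
    base = int(prefix_bits, 2) << (key_bits - p)
    for suffix in range(1 << (key_bits - p)):
        yield base | suffix

gen = keys_with_prefix("1111")
print(f"{next(gen):0{KEY_BITS}b}")  # 1111000000000000
print(f"{next(gen):0{KEY_BITS}b}")  # 1111000000000001
```

Fixing p prefix bits cuts the space by a factor of 2^p, and every generated key is a hit, so nothing is computed and then thrown away.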
What do you think? Would it be terribly slow, given that it pre-computes keys and then tosses some away? Or, in the -r 0x1 case, could it use something other than "increment key by 1" and still be just as fast?
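My guess (an assumption, not something I've benchmarked against this code) is that neither case has to be slower: a fixed prefix just changes where a contiguous range starts, and that range is still scanned with plain increments, while a periodic skip rule becomes a constant stride, which still costs one addition per key. A toy sketch:

```python
def scan(start, stop, step=1):
    """Walk [start, stop) with a constant stride; step=2 does one
    addition per yielded key, the same per-key cost as step=1."""
    k = start
    while k < stop:
        yield k
        k += step

# -r 0x1 on a 16-bit toy keyspace: the top nibble is fixed to 0x1,
# so the scan is just the contiguous range [0x1000, 0x2000).
print([hex(k) for k in scan(0x1000, 0x1004)])

# "every 2nd key" is the same loop with stride 2:
print([hex(k) for k in scan(0x1000, 0x1008, step=2)])
```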
Thanks!
I have something to share with you, if I could, in private. May I send you an email?
Or you could visit my profile where you can send me an email. I promise I won't hassle you, just a couple of questions and something to share.
... Ah yes, sorry I haven't tipped yet; I wasn't keeping track of the dates. I get paid today, but I won't have BTC to tip until my pay hits my account tomorrow. Thank you!