
Bitcoin Cash Development video meeting

We apologize for all the computer incompatibilities around the world. With that in mind, we're only six minutes late starting, so I'd like to get going. To give you an idea of what we're going to do today: this is what I hope is the first in a series of many discussions that will become public for the development teams working on Bitcoin Cash. The main goal of this meeting is to discuss the potential items for the next hard fork upgrade to Bitcoin Cash, to determine which items are realistic to consider for inclusion in the May 2019 upgrade, and to determine the status of each of the items listed and whether further discussion is required to solve any issues there might be. So we'll go through the specific issues one by one.

And I'm going to start now with doing some introductions. In my top left-hand corner is Jason Cox. Jason, can you introduce yourself? Sure. I am Jason Cox, Bitcoin Cash developer, currently contributing to Bitcoin ABC. Thank you. Antony Zegers? Oh hi, Antony Zegers.

I'm known as Mengerian online in forums and stuff, and yeah, I work on Bitcoin ABC. Amaury, introduce yourself please? Yeah, so I'm Amaury, I am the lead developer for Bitcoin ABC. Mark? Hi, I'm Mark Lundeberg.

I'm just sort of getting into Bitcoin Cash and trying to help out with the development process. Just getting started. Thank you Mark. Emil? So I'm Emil Oldenburg. I'm the CTO of Bitcoin.com.

Okay, thank you. Chris? Yep, Chris Pacia. I work on OpenBazaar and also on BCHD, a Bitcoin Cash full node.

Okay, thank you. Andrea? Hi everybody, I'm Andrea Suisani. I'm a Bitcoin Unlimited developer. Thank you.

So a small group with us today, and we're going to cover a number of subjects. As I said earlier, the main goal is to determine what's realistic to be included in the next upgrade. I'll just run through the list of items to be discussed first, and then we'll dive right in. The first item is the BIP 62 items. And I'll just read what was written: does it make sense to activate the remaining items, NULLDUMMY and minimal data push? Should CLEANSTACK be reversed?

Second item on the list is the 100-byte transaction size. Should that be changed, and what is the best approach for this? Third item is Schnorr. Is there any chance this will be ready in time?

What needs to happen to progress this item? And the fourth item is opcodes. Is anyone motivated to take responsibility for these? Someone needs to take ownership and work on it if it is to be included in the upgrade. Should only some be activated, for example MUL or INVERT?

And then the last item: is there any desire to rework the SigOps accounting? So I think we'll throw it open for discussion, starting with the BIP 62 items. Would anyone like to dive into the discussion on BIP 62? Jason. So just to get started, I wanted to make sure everyone here understands kind of what BIP 62 tackles.

My understanding is it's primarily about malleability. And what was recently implemented, in the last hard fork, was the implementation and enforcement of CLEANSTACK. I also understand that enforcing this has caused some addresses to be unspendable: for people who were spending SegWit UTXOs on BTC, spending the same kind of UTXOs on Bitcoin Cash is now impossible.

If someone thinks this is incorrect or inaccurate, please add to that, because that's my understanding so far. Well, it's.

Yeah, just to clarify, I think it's basically when people accidentally send Bitcoin Cash to a SegWit P2SH address. So yeah, that shouldn't be a normal thing, but I guess there are people doing that by accident. And CLEANSTACK prevents miners from being able to just, like, save those people's money.

So people cannot redeem their SegWit coins on BCH right now. Well, if you send. if someone accidentally sends Bitcoin Cash to a SegWit address, basically you need to get a miner to help you recover it. Because, anyway, it's hard to explain. anyone could mine it, I guess. You have to have a trusted miner to be able to get your coins out of there.

But I think that there have been, in the past, a few miners that were helping people out to recover their coins when they do that. But I guess I don't know if that's still the case. I mean, I guess my suggestion would be that someone needs to find out if this is a big problem, or if there are actually miners still able, like willing, to help people with it.

I don't really know what that information is. As far as I know they're still doing it. No. It's not possible to do it since the last fork.

So I think we have a first action item here: it is to make sure everybody is aware of what's going on. Yeah, do we know the scale of the impact? Like, the number of users? The amount of money that is now unspendable due to this? We have no way to know. Okay.

We would need to index all SegWit addresses first and then check the UTXO set. So that's quite. well, it takes a while. But there's no way to know.

Because they are P2SH. So you know. [inaudible] address. Does it apply to both the P2SH and regular SegWit addresses? No, just P2SH.

So I guess in terms of BIP 62, my impression of it, or my take on it, is that there's basically no point unless you kind of do all of it. If we have some hope of doing all the items then maybe there's some value to that, but if you still have even one malleability vector left, there's not much value in just doing one or two.

So I don't know. Unless there's some motivation to do all of them, maybe it would make sense to change this. But yeah, it would also make sense to know what the impact is. I'd just like to say one thing, which is: it is in principle possible to have all the BIP 62 relevant things, all the third-party malleability, fixed without breaking any coins.

But it requires a little bit of a different approach. So it would be, you know, a new hard fork to move to that sort of approach. So one thing that might make sense is even to just roll back what we have right now and then put in something better later on; I don't know, that's possible. So for that something better, are you saying like a new transaction format or something like that? Oh no. Well, for example, you could say that you only apply the CLEANSTACK rule to pay-to-public-key-hash and pay-to-script-hash multisig, the very standard sorts of transactions.

That would be one way to do it. I don't know if that's convenient, but you would say that every other script would have to manually check by itself, using OP_DEPTH: if you don't want your transaction to be malleated, then you have to use that sort of additional mechanism in your script. So it's kind of a workaround, but that's in principle possible.
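(To make the OP_DEPTH idea above concrete, here is a toy Python sketch; the little evaluator is hypothetical, and only the depth-check idea itself comes from the discussion.)

    def run_guarded_script(stack, expected_depth):
        # Rough equivalent of prefixing a script with:
        #   OP_DEPTH <expected_depth> OP_NUMEQUALVERIFY
        # A third party can malleate a transaction by appending extra
        # pushes to the scriptSig; checking the stack depth makes any
        # such padded spend invalid without a global CLEANSTACK rule.
        if len(stack) != expected_depth:
            return False  # OP_NUMEQUALVERIFY fails: extra or missing items
        return True       # continue with the rest of the script as usual

    # A normal two-item spend (<sig> <pubkey>) passes; a padded one fails.
    assert run_guarded_script(["sig", "pubkey"], expected_depth=2)
    assert not run_guarded_script(["junk", "sig", "pubkey"], expected_depth=2)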

Is there any further discussion on this item? Well, I think that what Amaury said about an action item, we should maybe try to make that concrete so that someone actually does something. So I guess it's a matter of understanding: do we want to fix all the third-party malleability that BIP 62 is fixing? If yes, then we have to go forward and get a measure of the amount of coins that are locked in P2SH SegWit addresses, if it is possible some way. And once we have these two data points, we can decide what to do. Otherwise, if there are no coins locked in these P2SH SegWit addresses, well, we could just go ahead and keep the fix that we have now, and then, once we assess whether we want to do the other fixes in the next hard forks, we could do it. But we have to assess what we want to do first, and get an understanding of the measure of the problem that we are tackling.

To kind of add to that question of the impact: is it only the SegWit-style addresses that are impacted, or are there other use cases that we haven't discussed? I know the SegWit one is the common one. Because if there isn't, maybe that helps limit the known impact. So the flag that has been activated has been a standardness rule for quite some time, so you need the miner's cooperation to spend those coins no matter what.

Okay. That's why it's not that hard for miners to add APIs and tools for their users, or anyone actually, to just submit those transactions. Is that something, like. who would. is that something you guys could do, Emil? Like, try to, since you have actual users? Yeah, it is actually something that we have considered, to add an API where you can submit non-standard transactions.

But only if they follow a specific format, so we don't mine crazy non-standard transactions. I was just talking about finding out if there's a problem with people accidentally sending their coins to these addresses. Whether that's an issue or not.

Have you guys encountered that? We got a few requests after the Bitcoin Cash hard fork, but that was a long time ago. Yeah, nothing recent. I don't actually know the status. You would need to, like, index all SegWit addresses on the BTC network first and then run that against the UTXO set on Bitcoin Cash.

So there's quite a large data set to go through. Some people were doing that deliberately, though, because the SegWit addresses are anyone-can-spend on BCH. So what was happening in a lot of cases is that as soon as somebody spent their SegWit coin from the same address, some unknown miner would come in and just gather up all those accidental Bitcoin Cash coins on those UTXOs, because any miner can take them once the redeem script is revealed. So actually now we have a better situation than we had before the last fork.

Because miners can't take the coins for themselves? Like, they are stuck for everybody, right? Yeah.

But is it better or worse? I don't know. It's different.

Sorry, I should have chosen a different word, but it's different, that's it. So in addition to the impact, we should be determining how many people perceive this as something that needs to be fixed. Because, you know, maybe there's a large number of UTXOs potentially impacted, but if people don't feel strongly about it, then maybe it's not worth fixing.

From my anecdotal experience: in the first few months after the August 2017 fork I heard from fellows, and also from other people on the internet, complaints. well, not complaints, but basically reports of people who wrongly sent BCH to BTC addresses. But this faded out since we got the new address format and other things, and people got more used to the fork. It could just be that I'm in my own bias bubble, but it's been a very long while since I've heard someone tell me about some kind of accidental sending from BCH to BTC.

All right. With that in mind, a suggestion to move on to the next item, if there are no objections? Before we move on, maybe we should find an owner real quick, just someone to follow up on the impact. Emil, are you able to take that? Is that something Bitcoin.com would be interested in looking into? I'm not sure we can commit to that right now; we have a lot of things on our plate. Yeah, so basically I guess the status of this item is that we just need more information, it seems like.

All right, with that we'll move on to the next item then: the 100-byte transaction size. Should it be changed, and what is the best approach for this? Who would like to start off on that? I can start off on that. Yeah, thanks. So at the last consensus change I was pointing out that this 100-byte limit could, in principle, affect some transactions. There were like 10 transactions since the August 2017 hard fork, or something like that, ten transactions only, that were less than 100 bytes; I think some of them were coinbases or something like this. So it could be relaxed to 64 bytes and have the same intended effect. Just to remind everyone, the intended effect here is to prevent a technical vulnerability in the Merkle tree, where you can have a node that looks like a leaf or something like this; I don't remember the exact thing. But you could relax that to be a 64-byte limit, and that would certainly be enough for everybody, I think.
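(As an aside, here is one way to picture the Merkle tree quirk Mark is recalling; this reconstruction is the editor's, not part of the call. An inner node of Bitcoin's Merkle tree is the double-SHA256 of the 64-byte concatenation of its two children, which is exactly how a 64-byte transaction would be hashed, so the two are indistinguishable.)

    import hashlib

    def sha256d(data):
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    # An inner Merkle node hashes the 64-byte concatenation of its children...
    left, right = b'\x11' * 32, b'\x22' * 32
    inner_node = sha256d(left + right)

    # ...and a transaction serializing to exactly 64 bytes is hashed over a
    # 64-byte buffer too, so its txid can masquerade as an inner node whose
    # "children" are the two halves of its serialization. Forbidding 64-byte
    # transactions removes the ambiguity; anything longer is safe.
    fake_tx = left + right  # stand-in for a 64-byte transaction serialization
    assert sha256d(fake_tx) == inner_node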
Yeah, so this rule creates headaches for the mining pools, because there's another rule that says, I think it's the coinbase input, it cannot be larger than 100 bytes, while the total transaction cannot be smaller than 100 bytes. So there are a lot of extra rules that you need to add to a new mining pool to make sure you don't accidentally mine a coinbase that is too short. Because the way mining pools work is that you configure the mining pool in a config file to specify your mining pool name, and if you start a mining pool and want to be anonymous, you don't put anything in the coinbase. So it means that you need to fill up your coinbase message with random garbage if you want to be an anonymous miner, because it's only anonymous miners that risk mining a coinbase smaller than 100 bytes. Like, if you add your mining pool name, like pool.bitcoin.com, this is usually not a problem, but if you don't want to add anything, you risk invalid blocks.

So would it make sense dropping the transaction size limit to something like 80 bytes? Are there any use cases between 64 and 100? Not that I'm personally aware of, but if you're going to change it, why not just do it right? Yeah, since we have to change it, why not say everything larger than 64 is okay, rather than putting in an arbitrary 80 or 90? I would even go further and say everything different from 64 bytes. Yes, I find it more logical than putting in another random number.

Yeah, I've heard this argument a lot and it does make sense to me. The only thing is, usually you want to design a critical system like this to have the behavior that you're looking for, and it should match the use cases that you want it to match. So when we're talking about transactions that are below, say, 80 bytes, I've asked a number of people: what are the use cases for these really tiny transactions? And I haven't heard anything very useful, other than the exploit, which I guess is a use case, technically. So by simply saying you can't have a transaction smaller than 80 bytes, you're limiting any strange behavior that was not intended by the system. It makes it easier to design the system to be anti-fragile: it's not that we're tackling any particular exploit, it's tackling exploits that are currently unknown.

The thing is that we are conflating two things here. Before we introduced the constraint on the transaction size, what was the situation? There was no constraint on size that I'm aware of, like 86, 70, whatever, right? We didn't have it. Okay, cool. So the first thing we are saying now is that we probably made it stricter than we should have, and we are going to relax it. The other thing is, since we are designing a very mission-critical system, while we are changing it again we need to think carefully about whether we want to move back to the situation we had before, or just relax the constraint a little bit, like 80 bytes. I see your point, but we are mixing two things here; there are two things on the plate. The first one is: do we want to go back to where we were before? And the second one is: oh, maybe where we were before wasn't as thoughtful as it should have been in the first place. I just want to underline that.

The only thing I'd want to add is that it's easier to drop this limit than it is to raise it. So let's say we do drop it all the way down to 64, and then we later find out, oh, there's a somewhat malicious use case at, let's just say, 65: raising that limit back up is a little bit more complicated in terms of deployment. But other than that there is not a strong case either way. You say that it is more complicated because you are more strict; but since, I guess, we would introduce this hypothetical raising of the constraint in a hard fork, right, then yes, it is more complicated, but at least for full nodes it is manageable; SPV wallets will be different, probably. What's more complicated is that you need to search through the mempool and find the violating transactions, or something, at the fork time? Is that what it is? Yeah, this is a complexity that would arise around the period of time when the fork is activated. Yes, I think that's one of them, but as far as I'm aware all the full nodes, though I don't know the code of all of them, have a mechanism in place to clean up the mempool during this activation time.
And also, if there is a rollback, the code has to look after it, like dealing with the rejection differences between the two states. But yes, this is one complication indeed.

Would anyone else like to weigh in on the possibility of changing the 100-byte transaction limit? I guess, just as an implementation detail: is it going to be the case that the size limit is kind of retroactively reduced? Like, barring putting any kind of checkpoint aside: after May, would you be able to go back and start a fork from before May with the 64-byte limit, or something like that? Or would we have to maintain two limits, as in, if the height is between these particular ranges then the limit is 100, and if it's after May then the limit is 64, type of deal? So in terms of software maintenance I think it would be really bad to enforce the retroactive limits; I don't think this would be positive for any implementation. So in theory, yeah, if we changed it to only 64 bytes being the excluded transaction size, then you would in theory be able to go back to the fork point and start mining, you know, 65- or 80- or 90-byte transactions, and that would be valid. At least that's the way I envisioned it.

So are we able to determine an owner for this? This doesn't have to be a person that implements it on all the node software, for example; it's just someone to kind of drive it and stay in communication with everyone to make sure it's done. Maybe writing a spec, some notes on why we decided to tweak this constraint. I could take it. Okay, thank you, that's great. Thank you. Any comment on this? Not really, no. I mean, I guess my overall take on it is that maybe it wasn't the perfect thing to do, but I sort of wonder, now that it is in place, whether there's really a strong motivation to change it again. But I guess that's part of what Andrea can investigate. Like the last item, I just feel like we don't have enough information to really know.

The next item on the agenda is Schnorr: is there any chance it will be ready in time, and what needs to happen to progress this item? I'm going to take a stab in the dark here: Amaury, would you like to speak on it? Yeah, so there is an implementation of Schnorr that I made, like, more than a year ago now. It's not been through the kind of review required for me to feel confident deploying it in the wild at this point in time, and if that doesn't happen very soon, then we won't be able to deploy it in May; we have like one month and a half. In addition to the algorithm itself we need to integrate it into the existing opcodes, so that requires some time in itself. So if the review of the algorithm itself doesn't happen very soon, this is not going to happen.

Can you briefly touch on the use cases for the people listening in on us? Yes. So Schnorr is another signature algorithm, alongside ECDSA, and it's more flexible in many ways than ECDSA. The reason why ECDSA became more of a standard than Schnorr is that for quite some time Schnorr was patented; this is actually the number one reason for ECDSA, its reason to be was to provide an alternative to Schnorr that is not patented. So Schnorr has advantages in terms of validation, because we can do what we call batch validation, meaning you can take, say, a hundred signatures and do some computation that verifies all hundred signatures, and that computation is not as expensive as checking them one at a time. So when we have plenty of signatures to check, which is the case when we receive a new block, this is very advantageous to be able to do that.
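(For readers who want the shape of the math: in a generic key-prefixed Schnorr construction, with notation chosen by the editor rather than anything specified on the call, a signature (R_i, s_i) on message m_i under public key P_i verifies individually as

    s_i G = R_i + H(R_i \| P_i \| m_i) \cdot P_i

and a batch of n signatures can be checked at once using random weights a_i:

    \Big( \sum_i a_i s_i \Big) G = \sum_i a_i R_i + \sum_i a_i H(R_i \| P_i \| m_i) \cdot P_i

The batched check amounts to one large multi-scalar multiplication, which is cheaper than n independent verifications; that is where the block-validation speedup comes from.)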
That's one big advantage. From a user perspective it's also interesting because users can do aggregation and the like, and those look just like regular signatures, so this is an increase in privacy for those users. And this is also better for the network itself, because it just has one signature to verify: regardless of whether it's a regular spend or a two-of-three multisig, in all cases it looks the same to the network, and it's just one signature to check.

Any other discussion on this? I guess I'm just curious: for further review, do you basically need people who can look into the crypto math and all that kind of stuff? Is that kind of what you're looking for? Well, the math itself has been out for many years now, so that part is fairly well covered. But when it comes to cryptography, you need to implement it in ways that are very specific. You need to make sure you don't leave some piece of secret data somewhere in memory, where some other code running on the machine could go rummage through the memory you left behind and find it. You need to make sure that you implement it in such a way that there are no branches and no memory accesses that depend on the secret, because otherwise you have side-channel attacks through the branch predictor of the CPU and through the cache hierarchy of the CPU, which likewise allow some third party to recover information about secrets. So there are all kinds of very specific things, which you really wouldn't care about in general code, that you need to be careful about for this kind of code. And those are not things that you can really test, so it needs extremely careful review, way more review than a regular piece of code.

Just a comment on that: the parts that are necessary for consensus, for validating blocks and checking transactions, that sort of thing, those wouldn't have any secret data. So do you feel more confident about those parts, the parts that are not generating signatures, let's say? Yeah, so that's true. There are other pitfalls for those parts, but obviously those pitfalls are not privacy pitfalls. For instance, there are various places in the code where you would hash some value and then check that the result of the hash is a valid scalar for the elliptic curve that we use. And the thing is, the scalar order is not 2^256 but a number that is slightly smaller than that; it starts with a bunch of Fs. So it's actually very difficult in practice to find a preimage of the hash that does not fall into that range, which means it's very difficult to provide actual tests that will trigger that code path, right? But you need all of that to be correct anyway, because maybe at some point some dude is going to find the one value that produces a hash outside the right range, and at that point you get a chain split. So that's another kind of pitfall you need to be very careful about: it's going to be uncovered by theory and human review, not by testing, because we don't know of any preimage that falls outside the right range at the moment.
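(A tiny illustration, from the editor, of the "no secret-dependent branches" requirement mentioned above; Python's standard library happens to ship a constant-time comparison helper.)

    import hmac

    def leaky_equal(a, b):
        # Naive comparison: the early exit leaks, through timing, how many
        # leading bytes matched -- exactly the kind of side channel described.
        for x, y in zip(a, b):
            if x != y:
                return False
        return len(a) == len(b)

    def safe_equal(a, b):
        # Constant-time comparison: examines every byte regardless of where
        # the first mismatch occurs.
        return hmac.compare_digest(a, b)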
So, I mean, we've got, I guess, a couple of different ways that Schnorr signatures can be implemented. It seems like the simplest way is to kind of just overload the existing OP_CHECKSIG, but that also seems like the most dangerous way to do it, because we're essentially exposing all UTXOs to this new code. I don't know what you guys think about the security of that. I mean, that seems like the nicest way to do it if this was really battle-tested stuff, but it worries me a little bit that it's new and it's exposing all the old UTXOs to it. Yeah, that's why this has to be done perfectly. I would add to that that the security assumptions made by Schnorr are the same security assumptions made by ECDSA, so if you were to find a way to break the Schnorr scheme, that would most likely mean that you can break the ECDSA algorithm that we currently use.

I guess you did make one comment, though, about reducing the number of branches. I know this isn't exactly the same, but implementing another signature scheme on old UTXOs kind of feels like we're introducing a branch: there are a couple of different outcomes you could get in order to sign a particular transaction for old UTXOs. Even with what you said, that the assumption is the same for both, it does make you kind of squirm. So those branches, I don't see them as the most risky ones, because this part of the code is completely deterministic, if we are talking about the interpreter. So it's very easy to write all the unit tests that we need to make sure this part of the code is not going to do something weird. Yeah, generally stuff like interpreters and compilers are very easy to unit test extensively, so I'm not too worried about putting a branch there. I would be more worried about a branch in the networking or the DB, anything that is multithreaded or depends on how a third party reacts; but this part is very easy to feed input and check the output.

Anything further on this item? We will need an owner, because Amaury has been asking for review on the Schnorr code for a little while now. Someone to kind of drive this review home over the next month and a half, like you said; otherwise the review can be ongoing, but it won't make it for the next hard fork. So it depends on whether we view this as valuable enough to put a lot of weight behind in short order. Yeah, I can take that. I've been writing the Schnorr opcode spec so far, and I think that, yeah, there's a little bit of controversy there with exactly how it's done. But if the concern right now is getting a cryptographically secure implementation, then I can review that at least, and try to get people on board with it. Yeah, and I'll talk with you after, because I want to talk about getting more reviewers, maybe even potentially some people outside of the Bitcoin Cash space, because I think this is something that can be reviewed by cryptographic experts and that sort of thing. Yeah. And I think it's fairly clear from this call today that there's an invitation for people who are going to be watching the recording, or any of the attendees right now: if they have an interest, they can contact you guys directly on that.

So if there's nothing further on Schnorr at this time, we'll move on to opcodes, the old opcodes. Jason, maybe I'll get you to help me with this. What I have written down here is: is anyone motivated to take responsibility for these? Someone needs to take ownership and work on it if it is to be included in the upgrade. Should only some be activated, for example MUL or INVERT?
Right, so we actually have diffs available for, I believe, all of the opcodes that were recommended for the hard fork. There is review that needs to be done, and there are tests that need to be written, but other than that the implementation is more or less completed. That's my understanding; Amaury, you can correct me on that if I'm wrong. So really we just need someone to own this: make sure that there is a complete spec available and plenty of unit tests, so that all the implementations can go and implement these and make sure we're all doing the same thing. This is mostly an ownership issue as opposed to writing code and implementing it.

So there is one issue, or at least a potential issue, in the overflow semantics. This is something that was raised in the opcode group at the time when those opcodes were brought to us: because the number system used in Bitcoin is not two's complement like regular hardware uses, all the state of the art from people working on compilers is essentially completely moot, completely useless. We need to have someone look into the overflow behavior and make sure that it makes sense and does what we expect. As long as there is nobody willing to do that, the opcodes that can overflow cannot go in. Things like INVERT, for instance, or the shifts, can be implemented, but they need someone to take ownership of them and track them to make it happen.

Yeah, I agree. I guess, just to throw in my two cents, which don't have that much value: my impression of this whole thing is that everyone essentially agrees in principle with having these, but it doesn't seem like anyone is super motivated to actually make it happen. So yeah, that's kind of my impression. The other thing I was wondering about is the whole LSHIFT and RSHIFT thing; it seemed like there was some discussion about whether that was done in the best way or not. I don't know if that's an issue or not either.

Yes, that was discussed, and the question is: because the numbers are not only not two's complement but are also little-endian, even internally, you end up with shifts that cannot work for both binary blobs and numbers the way you would expect on a regular instruction set. So at some point you need to choose, and it's going to be broken for one of the two. The decision that was made at the time was that it's more useful to have shift work on binary blobs than on actual numeric values, and so this is what people went for at the time. I don't think any new information has come up since then that would invalidate that conclusion.

By any chance, Amaury, do you recall the use cases that people had envisioned for the binary blob shift? Yeah, so generally you may want to put some data on the stack and inspect it, to verify some part of it or aggregate it. So maybe in the case of an oracle, for instance: you have an oracle, so you have some data that is provided, and you have a signature on those data, and then you have a part of the script that verifies that the data contains this or that information. In those cases, stuff like split and shift, which allows you to select pieces of the blob, is very useful. That's the main use case. But you can achieve that already with split? That's pretty. it's a little redundant in some cases. Yes, we can do that, but split only allows you to do it at byte granularity. So maybe, if you want to pack a series of flags, for instance, you may not want to spend one byte per flag; you do a shift and a mask, and you get the value of the flag.
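(Two sketches of what was just described, with the encoding details filled in by the editor from the script-number serialization rules rather than from the call itself. Script integers are little-endian magnitudes with a sign bit in the top byte, which is why two's-complement overflow intuitions do not transfer; and a right shift on a raw blob lets a script read single-bit flags that OP_SPLIT's byte granularity cannot reach.)

    def encode_scriptnum(n):
        # Script integers: little-endian magnitude with a sign bit in the
        # most significant byte (sign-and-magnitude, not two's complement).
        if n == 0:
            return b''
        neg, mag, out = n < 0, abs(n), bytearray()
        while mag:
            out.append(mag & 0xFF)
            mag >>= 8
        if out[-1] & 0x80:                 # top bit taken: add a sign byte
            out.append(0x80 if neg else 0x00)
        elif neg:
            out[-1] |= 0x80
        return bytes(out)

    assert encode_scriptnum(127) == b'\x7f'
    assert encode_scriptnum(128) == b'\x80\x00'  # needs an extra byte
    assert encode_scriptnum(-1) == b'\x81'       # not 0xff as in two's complement

    def flag_at(blob, bit):
        # The shift-and-mask idiom that a blob-oriented OP_RSHIFT enables.
        return (int.from_bytes(blob, 'big') >> bit) & 1

    assert flag_at(b'\x05', 0) == 1 and flag_at(b'\x05', 1) == 0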
You can do more powerful stuff with shift than you can with split. But you're correct: it's not enabling anything new. You could do all of that without the shift, just like you can do a multiplication with a bunch of additions and a few if statements. But it's really useful, I guess.

So I'm looking for an owner. I would actually like to take this one myself, except my time is constrained and kind of stretched across some other things at the moment. I think everyone here might be kind of in that same boat. But does anyone know anyone outside of this meeting that may be interested in taking this as an item, writing the spec and making sure the test coverage is thorough on those opcodes? I could ask the other BU devs if there is someone interested. We included an implementation in the SV client that we produced, but we just took the code and plugged it in, just to be sure to be compatible fork for fork; we didn't change anything in terms of code. The review has been done, but, like Amaury said, not as thoughtfully as it should have been. So it could be that some of the guys maybe want to work on that. I can't say for sure; I could ask. Yeah, could you do that please, and get back to me with it? Because maybe we can coordinate on finding an owner for them. Okay.

Moving on to the next item, if there are no further comments: is there any desire to rework the SigOps accounting? Antony, you brought this forward; do you want to speak to it first? I mean, again, it's one of these things that has kind of always been hanging around as an issue, and it's not really urgent, but I guess I just figured it was worth listing. It's a little weird right now how the sigops are counted; it doesn't really make sense in a few ways. So I guess in the long term it seems like something that should be dealt with eventually, but it also doesn't seem super urgent. I don't know if anyone else has thoughts on that.

To kind of add to what you said: basically the sigops counting is done on a per-megabyte basis, so it packages up the first megabyte of transactions and counts sigops, and then does that for the next megabyte. What it really should be is the sigops over the entire block, just making sure that the sigops per megabyte is lower than a certain value. But that's not the only problem: the way sigops are counted makes no sense whatsoever; it's overly complex. I guess, yeah, the way it comes out doesn't really make sense. Oh sorry, but it seems like if you're going to change it, you might as well make it right. So I guess that's the issue: it's a bit of a bigger change than just. yeah. So, making it right: you would essentially count the number of sigops as you execute them, to know how many sigops actually ran in that block. Because right now it's counting sigops in the outputs of transactions, which are not executed in the block, and it's not counting the various sigops that are in the inputs, unless they are P2SH inputs, in which case those are counted. The whole thing makes no sense and doesn't even accurately reflect the number of sigops that are required to validate the block.
To summarize, it's basically just a bunch of bad heuristics. Yeah, like the multisig thing is weird: it counts as 20 all the time, no matter what, and stuff like that. So anyway, in keeping track of the various items I just thought I would raise it as an issue, just to keep it on the radar. I don't know if anyone has an interest in working on it or not. I was aware that Andrew Stone was thinking about it and has some ideas on improving it, but since he's not here I can't speak for him.

Another question, please: is it actually possible to follow the path of correctly counting the sigops while executing, while validating a block? Or is that overkill, and a better set of heuristics would do the trick? Like, do we really need exact accounting for all sigops, or would a better estimate suffice? No. So counting, like doing a plus-one on some variable when you verify a signature, is probably not even going to show up in any kind of profile, right? But the way sigops are counted right now is not only inaccurate, it's actually fairly expensive, because you need to parse all the scripts twice: once to execute them and once to count the sigops with the heuristic that we are using. So it's probably going to be faster to do it as we execute. There is one tricky situation that we need to make sure we take care of: when you have a transaction in your mempool and you cache the result of the script execution, you need to make sure that you also remember how many sigops were done during that script execution. So we need to extend the script cache to also cache the sigop count. Yes, we need to make sure the cache keeps track of the sigop count if we want to cache anything; but besides that there is no problem, and it's probably going to be both cheaper and more accurate.

Okay, yeah, I mostly agree that it's something that would be nice to fix at some point. I don't know if this coming hard fork makes the most sense, because it seems like something that takes quite a good deal of planning, much more than, say, those opcodes. In particular, I did have a question about what Jason was talking about, the way it currently handles it on a per-megabyte basis. It sounds like my code might actually be wrong on this, because what I do is just take the sigops per megabyte and multiply that by the excessive block size to get the max sigop count. Is that not the way you guys handle it? Yeah, no, that's not correct. The way it's done right now is that you take the block size and round it up to the next multiple of one megabyte; so if the block is, say, 1.2 megabytes, that counts as 2 megabytes, for instance. And then you apply a limit of 20,000 sigops per megabyte: if the size you computed is 2 megabytes, you multiply 2 by 20,000 and that's the maximum you accept. So it's not based on the excessive block size. Then the way you do it is only fine as long as you follow the chain; if you want to be a miner, if you want to mine with BCHD, you need to fix that. Yeah, okay. I think we just demonstrated the confusion around the current implementation; we would like to fix it so it's much simpler, like what you described.
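(A small sketch of the rounding rule as Amaury lays it out; the constant names are the editor's, and real implementations differ in detail.)

    MAX_SIGOPS_PER_MB = 20_000
    ONE_MEGABYTE = 1_000_000

    def max_block_sigops(block_size_bytes):
        # Round the block size UP to the next whole megabyte, then allow
        # 20,000 sigops per rounded megabyte. A 1.2 MB block therefore
        # gets 2 * 20,000 = 40,000, not 1.2 * 20,000 = 24,000 -- and it
        # is not based on the configured excessive block size at all.
        mb = -(-block_size_bytes // ONE_MEGABYTE)  # ceiling division
        return max(mb, 1) * MAX_SIGOPS_PER_MB

    assert max_block_sigops(1_200_000) == 40_000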
I'm actually surprised by how generous the limit is. It's like one checksig for every 50 bytes, or something like that, which is more than you could normally do. Yeah, so there are reasons for that, mostly due to historical factors and to the way sigops are counted: you can actually get a density of sigops that is higher than that very easily. The reason is that it counts sigops in outputs. A pay-to-script-hash output is less than 50 bytes, so if you have a transaction with a bunch of outputs, you know, a ratio where it has way more outputs than inputs, you can actually run into the limit stupidly easily. It's because it's counting the wrong stuff. Any other comments on reworking the sigops accounting?

Okay. I also have listed: any other items to consider, and this is specifically for the May 2019 upgrade. I have one point I would like to bring up, or at least float the idea. So currently, since the Bitcoin Cash hard fork, we have kept increasing the block size, but one limit that has not been touched is the chain of unconfirmed transactions. So I would like to float the idea that this limit is raised. The problem is, we've been doing some experiments with this, and it's a big headache if nodes are not configured the same. So the only way of actually doing this would be for everyone to activate the new rules at the same time, which would be at a hard fork time. So this is not a hard fork rule or anything, but if this limit is raised, it should be activated at the same time as the hard fork, just to make sure that all the nodes have the exact same configuration.

Yes, you're correct. We actually ran into that before, when we changed the size of OP_RETURN: you do it in synchronization with the hard fork even though it's not a consensus change per se, because if you don't do it with an activation point, you end up essentially completely breaking zero-conf. On the topic of chained transactions specifically: I agree that we want to get rid of that limit at some point. However, right now, because of the way the software is written, every time you accept a new transaction or remove a transaction from the mempool, you do a graph traversal over all its children and parents, and if you don't limit the depth of that, you may end up doing something extremely expensive. It's not a computation that grows linearly; it's exponential or factorial or some stupid complexity like that, so it grows very, very quickly, and you expose yourself to a lot of resource usage when you increase that limit. I would rather rework that code so it doesn't do anything stupid, and then get rid of the limit altogether.

Yeah, there was actually a study done not too long ago, I'm seeing if I can find it, where someone profiled different chain lengths to see how poor the performance was, and if I remember correctly it gets really bad at around 50 or 70 chained transactions. But that said, Emil, do you know if there is any direct positive impact to raising it? Because it's currently at 25: does it make sense to raise it to 35 or 40, or is raising it just by that much not enough? Yeah, so we do get some support tickets once in a while. If you, for example, try to place a lot of dice bets using our wallet, or any kind of wallet, at some point you will probably hit the maximum chained transaction rule, and you will get weird error messages that the users don't understand, and they're just angry and email support, like, "why can't I place these bets, you guys, what is wrong?" And that's because of the dice: if you win, you get one transaction back, so you can only play it like 12 times, and then you hit the limit. So if you win 12 times, which you can do if you're playing the easiest bet, then you've got a less than ideal user experience.
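(The arithmetic behind the "12 times" figure, as the editor reads Emil's description; the two-transactions-per-round assumption is inferred, not stated outright on the call.)

    CHAIN_LIMIT = 25  # default cap on a chain of unconfirmed transactions

    # Each winning dice round chains two more unconfirmed transactions:
    # the player's bet plus the payout spending it. A winning streak
    # therefore exhausts the chain after about 25 // 2 rounds, until a
    # block confirms and the chain resets.
    rounds_before_error = CHAIN_LIMIT // 2
    assert rounds_before_error == 12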
So we know they have that problem, and we get the support tickets for it in our wallets. But also, we are ourselves building an on-chain dice game, and we are also building other on-chain services that would be helped by being able to send a few more chained transactions. Yeah, there are some web wallets as well that generate a bunch of chained transactions. Even though they went with SV, I'm thinking about Money Button, for instance: every time you use the Money Button, you chain the transaction with the previous one that you made, so you get the same kind of issue that you get with Satoshi Dice here. So there are a few services that would benefit from more chained transactions. Yeah, also Memo.cash: they had to do all these workarounds to be able to send more than 25 messages per block, you know, per user. So they worked around it, but that's kind of painful.

You know, if we remove child-pays-for-parent, raising this limit, I think, becomes significantly easier. Yeah, but child-pays-for-parent is a somewhat useful feature. But right now, is anyone using it? Yeah, it's kind of limited; ABC has it, I think it was merged like last year; we don't. Okay. Well, I think that limiting child-pays-for-parent to some kind of depth, like not going back to any level of ancestry, only the parent, we could limit the length of the child-pays-for-parent chain. In the meantime, while we rework the code that goes through the whole unconfirmed chain, that could be a temporary solution to get a significant increase. Yeah, there are two problems. One is walking up the chain of parents; effectively, we can solve that one by just limiting the depth to which we check child-pays-for-parent, to one or two or something small. The other one is sigops accounting and size accounting for block construction, and this one essentially requires the rework we know we need. But this is one more reason why we are required to overhaul the way we do block construction. So, if we are able to limit the child-pays-for-parent depth relatively easily, would it make sense to coordinate, for this fork, bumping the chained transaction limit up to fifty, essentially doubling it, without taking the performance impact?

I guess that for BU there's no problem; we don't have child-pays-for-parent, so we are okay. We even tested it with very long chains of unconfirmed transactions in the Gigablock Testnet Initiative, and chained transactions were not among the bottlenecks that we hit, so we are okay with it. But for the other implementations that have child-pays-for-parent: once you measure that there's no performance impact, once you put in place a constraint on the depth of child-pays-for-parent while raising, at the same time, the allowed length of a chain of unconfirmed transactions in the mempool to 50, why not? That way Satoshi Dice players would get the warning, the error, and whatever complaints come with it, at 24 or 25 winning strikes rather than 12, so that the number of tickets opened would decrease significantly. Why not?
If we do this, child-pays-for-parent is not going to be the only issue: the whole accounting of sigops and size for block construction, for getblocktemplate, is going to be another one. It also requires arbitrary traversal of the graph when you add or remove transactions. Okay, so what about gathering a bunch of data, maybe also rerunning the study that Jason was mentioning before, to see if we could have a reasonable increase without hurting any other part of the system? If we can reach something like forty or fifty without impacting getblocktemplate or child-pays-for-parent or whatever, why not? Let's just measure it and then decide: if we see something like under five percent, two percent, or even zero percent impact, then why not do it? It's not the final solution, but it could be a stopgap, a band-aid, for the problem we are facing. Yes, I wouldn't be against it. I mean, it's not really a technical argument, but all those people that run into that problem would appreciate it. This is open source, you know: if someone cares about some problem, they need to be helping us; we cannot be, you know, just. I guess the task is measuring the proposed change in the default policy, like having an ABC client set the default to fifty, instead of twenty-five, for the ancestor and descendant chain lengths, and measuring the impact on the getblocktemplate RPC and whatever else. You want to make sure that you also generate inputs that are purposefully adversarial, right? Because you don't want someone to be able to bring down the network by just sending a graph of transactions that sends the software into doing a crazy amount of computation.

But isn't that something someone could exploit already, if there is some pool that uses a huge value for this parameter? No, because this accounting happens in the mempool. You could receive blocks with deeper chains of transactions and it would not be a major issue, except for block propagation, because you might not have those transactions in the mempool. Yeah. So right now any pool can tweak it any way they want and mine a hundred chained transactions if they want to; but what we discovered is that if you have nodes with different rule sets, then they may get out of sync. Your node has valid transactions that other nodes don't have, so your node keeps trying to send and forward transactions that are seen as invalid by other nodes. Okay, so actually this is a policy that is more consensus-related than we thought before. Not really consensus; it's more like your node might end up with a transaction that doesn't exist in other nodes until it's included in a block. Okay, so the matter is that if there are different settings for these parameters among the miners and the network, that means block propagation and transaction propagation will be hit, and at the end of the day we will have higher block propagation times, right? Yes. So everyone has to change at the same time; that's the only way to fix this, and the only way to do that is to have it activated at the hard fork time. Yeah. So it seems like it's not that easy to just raise the limit, even though that would be helpful now, to just raise it a little. But yeah, like everything inherited from Satoshi in every one of the clients, this needs to be rewritten and fixed.
Yeah. So I think we have a bit of a cultural issue generally with BCH here, because everybody knows that these things need to happen, but nobody is stepping up to the plate. For the previous change of that kind, for OP_RETURN for instance, we had to finish it ourselves, right? We received some patches, but they were not high quality enough, and we had to finish the work. It's been the same for many other changes. The thing is, people who care about some change need to be stepping up to the plate and making it happen, because those changes don't happen magically; they don't materialize out of thin air. If nobody is doing it, then it's not going to happen. So in this case, for this to happen, we need to refactor the block construction code. Yeah, and we've known that this code needs to be refactored for, like, forever. It's a big chunk, but if we don't start, we'll never finish. So, like was discussed earlier: if we could do a stopgap to limit the depth of child-pays-for-parent and then kind of defer some of those larger refactors, would that be acceptable? Yeah, though we need to have very good numbers, especially adversarial ones, right? Not just what happens to the CPUs when we change the config, but what happens when someone tries to exploit it. So, Andrea, who did the first chained transaction research? I think it was. hmm, okay. Maybe we could reach out to him and see if he'd be willing to kind of extend his research on this. Yeah, that looks good, I could ask him.

Are there any other items to consider before we move on to some questions from the audience? Nope. Okay, I'm going to send the question out to the panelists. I'll read it; I cannot pronounce the gentleman's name. He says: "I have a preliminary merkleized radix tree implementation. The design is primarily a flat-file Merkle tree wherein we store the tree in a series of append-only files. I'd like to know if Merklix is still going forward; if not for May, sometime thereafter?" Yes. Certainly not May, though, but it's still in the plans. Okay, anyone else like to comment on the question? I guess the only thing is, he says he has an implementation going; I would encourage him to get in touch with developers and get some preliminary review, start writing tests for it. The only way that development on this moves forward is by people stepping up, right? Jason, you're available if somebody wants to send an email and get the conversation started? Yeah, absolutely. Okay, and if I remember correctly, your email is Jason B. Cox at Bitcoin ABC dot org? Say it again, please? Jason B. Cox at Bitcoin ABC dot org. You can find our emails on the site, right? Okay, or through GitHub; you can find our emails that way. Okay, I trust that will get the ball rolling.

Do we have any other questions from anyone in the audience? There are still six people attending as participants; if you do have any questions, please forward them. It looks like not. Do you guys have any further conclusions you'd like to share before we end the meeting? Well, I can ask a question: do you feel this has been a productive meeting? Yeah, yeah, I guess so. My comment is, I think it's good just to raise our own awareness, and maybe other people's awareness, about the status of things and what's happening. Yeah, I think it's been useful.
There are a number of things that will need to be put into a timeline coming up to the upgrade in May, and so it is our intention to facilitate as many meetings as is necessary. Off the top of my head right now, I'm thinking every two weeks, similar to what we did prior to the fork in November. So if you guys have any comments on that, or anybody in the audience has any comments, please send them along. I will be processing the video from this call today in the next couple of days and hopefully get it up on YouTube and a variety of other sources, so you can have a look and see some of the people that are working behind the scenes on Bitcoin Cash. So, anyone else have any further questions or comments? Okay, thank you very much for attending. I look forward to chatting with you all again very soon, and thanks to the attendees as well for being here. I'll bid you a fond farewell. All right, bye everybody. Yup.
