19:00:52 #startmeeting
19:00:52 Meeting started Thu Jul 9 19:00:52 2020 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:52 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:52 ahoi
19:00:57 hi
19:00:58 hi
19:00:59 hi
19:01:03 hi
19:01:07 #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier jl2012 achow101 meshcollider jnewbery maaku fanquake promag provoostenator aj Chris_Stewart_5 dongcarl gwillen jamesob ken281221 ryanofsky gleb moneyball kvaciral ariard digi_james amiti fjahr
19:01:09 jeremyrubin lightlike emilengler jonatack hebasto jb55 elichai2
19:01:19 hi
19:01:20 hi
19:01:22 hi
19:01:26 hi
19:01:37 hi
19:01:38 hi
19:01:50 hi
19:01:57 hola
19:02:12 hi
19:02:21 hi
19:02:28 hi
19:02:30 one proposed meeting topic: small clarification on the goals of the mempool project (jeremyrubin)
19:02:36 any last minute topics?
19:02:50 hi
19:02:56 another short one: can we drop gcc 4.8?
19:03:06 ok
19:03:25 proposed topic: new zmq notifications, or something else https://github.com/bitcoin/bitcoin/pull/19462#issuecomment-656140421 and https://github.com/bitcoin/bitcoin/pull/19476
19:03:41 sipa: not sure that's a good meeting topic; it just needs someone to investigate distros?
19:03:44 hi
19:03:54 hi
19:04:04 #topic High priority for review
19:04:16 can i haz #19386
19:04:17 #19325 pls
19:04:19 https://github.com/bitcoin/bitcoin/issues/19386 | rpc: Assert that RPCArg names are equal to CRPCCommand ones (server) by MarcoFalke · Pull Request #19386 · bitcoin/bitcoin · GitHub
19:04:20 https://github.com/bitcoin/bitcoin/issues/19325 | wallet: Refactor BerkeleyDatabase to introduce DatabaseBatch abstract class by achow101 · Pull Request #19325 · bitcoin/bitcoin · GitHub
19:04:27 https://github.com/bitcoin/bitcoin/projects/8 13 blockers, 1 bugfix, 3 chasing concept ACK
19:04:36 please don't add any more before removing any xD
19:04:38 hi
19:04:45 you can drop #18787; after talking with a few people, a libstandardness would be more accurate
19:04:50 https://github.com/bitcoin/bitcoin/issues/18787 | wallet: descriptor wallet release notes and cleanups by achow101 · Pull Request #18787 · bitcoin/bitcoin · GitHub
19:05:01 #18797 sorry
19:05:04 https://github.com/bitcoin/bitcoin/issues/18797 | Export standard Script flags in bitcoinconsensus by ariard · Pull Request #18797 · bitcoin/bitcoin · GitHub
19:05:10 removing #18191
19:05:13 https://github.com/bitcoin/bitcoin/issues/18191 | Change UpdateForDescendants to use Epochs by JeremyRubin · Pull Request #18191 · bitcoin/bitcoin · GitHub
19:06:17 hi
19:06:22 please see #19033, it's for 0.20.1
19:06:24 https://github.com/bitcoin/bitcoin/issues/19033 | http: Release work queue after event base finish by promag · Pull Request #19033 · bitcoin/bitcoin · GitHub
19:06:25 ariard, achow101 done
19:07:11 jeremyrubin: ok
19:07:13 would be nice to get the build tarballs fixed finally too <.<
19:07:19 hi
19:08:10 hi
19:09:20 #topic clarification on the goals of the mempool project (jeremyrubin)
19:09:49 Yeah just quick;
19:10:10 So I think that there's some confusion or at least co-mingling of concerns based on how I've presented things
19:10:35 A lot of the motivation is to reduce the memory usage in the mempool
19:10:49 and to better bound the algorithmic complexity of functions
19:11:09 So that one day, maybe, we could lift some restrictions (e.g., descendants)
19:11:42 It's less of a "make the mempool faster now" performance motivation.
19:12:09 ok good to know
19:12:13 In a lot of cases I think it would be fine to accept a *slower* score on some benchmark (or at least the same) when the goal of the PR is reducing memory
19:12:44 especially in cases where there may be a distant invariant which is currently protecting us from a "crash the network" style DoS
19:12:56 if those are the goals, I wonder if it might make sense to move complexity into the miner
19:13:00 we could have benchmarks that measure memory usage I suppose
19:13:08 but that could interfere with compact blocks
19:13:27 luke-jr: this does make sense for certain things.
19:13:49 I wonder if it'd be possible to support a build without mining support that performs better
19:14:12 I think one of the things that is difficult in particular is that we don't have a great set of adversarial benches for the mempool
19:14:23 And you need both whole-system and unit tests for the functions
19:14:30 I think there's something of a conflict there: miners want the block selection to be as fast as possible, while other node users would want to reduce memory usage for the mempool as much as possible
19:14:34 And both asymptotic and bounded size
19:14:37 wumpus: +1
19:15:19 wumpus: by making mining build-time-optional, we could possibly get both cheaply?
19:15:46 I'm not sure about reducing the memory usage; what's the relation between the size of your mempool and the perf of fee estimation?
19:15:50 I wonder if there's a way to change class sizes at runtime in C++
19:16:03 Reducing memory usage is related to DoS primarily
19:16:06 that's kind of annoying for testing but yes
19:16:24 So the goal is to eliminate the class of vuln
19:16:42 jeremyrubin: eh? users can always adjust their mempool size
19:16:46 I don't care about performance here that much, even a 2x slower mempool with less DoS surface is probably fine
19:16:53 it doesn't help if only the miners would have the vulnerability though
19:16:55 mempool size _predictability_ is a DoS concern
19:16:55 luke-jr: no, for the algorithms themselves
19:16:57 reducing mem usage would just mean more txs per MB
19:17:03 not for the mempool size itself
19:17:20 I think there is a confusion there between memory usage of the caching data structure for updates and overall mempool size
19:17:23 e.g. improving memory usage on average, but worsening it under a particular edge case, might constitute a vulnerability
19:17:23 Mostly fixing short-lived allocations
19:17:31 jeremyrubin: what is your expected memory usage improvement?
19:17:50 sipa: but there's no vuln with zero mempool…
19:18:32 nehan: it's a project with a bunch of algorithm improvements based on epochs for traversal instead of sets
19:18:37 (5min cap exceeded fwiw)
19:18:51 ah yeah didn't mean to turn this into an ordeal, just trying to clarify
19:19:05 not like we have any other topics?
19:19:10 perhaps just document this as a summary on the relevant PRs?
19:19:16 happy to move on if there's other things to discuss
19:19:17 unless it already is
19:19:17 luke-jr: we do, sipa had a topic
19:19:21 oh
19:19:30 nothing important
19:19:30 right
19:19:48 #topic can we drop gcc 4.8 (sipa)
19:19:51 why?
19:19:54 just something to annoy fanquake
19:19:58 :)
19:19:59 lol
19:19:59 lol
19:20:02 so, i was looking at #18086 again
19:20:05 https://github.com/bitcoin/bitcoin/issues/18086 | Accurately account for mempool index memory by sipa · Pull Request #18086 · bitcoin/bitcoin · GitHub
19:20:32 and was trying to make the accounting allocator not track copies of containers, which is a feature of the C++11 allocator infrastructure
19:20:39 I *really* think we should wait with bumping gcc versions until the C++17 requirement
19:20:45 but it seems gcc 4.8 ignores it or otherwise fails to implement it correctly
19:20:46 which isn't too far away, just one major version
19:20:52 yeah
19:21:17 sipa: not the end of the world if it's wrong?
19:21:18 but i've just noticed that appveyor also fails at it :s
19:21:25 luke-jr: could cause UB
19:21:32 hmm
19:21:33 if used in totally reasonable ways
19:21:40 (but probably not in my actual PR)
19:21:46 not that I'm opposed to bumping it sooner if it's really required for something, but it seems a waste of time to discuss minor version bumps if a big one is around the corner :)
19:21:59 when C++20? :p
19:22:07 in 2025
19:22:11 * luke-jr found C++20 to be required for totally obvious/expected features the other day :/
19:22:14 luke-jr: hard to talk about things that don't exist
19:22:42 luke-jr: get sipa to backport for you like Span :p
19:22:50 cfields: can't backport syntax
19:22:59 Only 3.6 months till branch off: https://github.com/bitcoin/bitcoin/issues/18947
19:23:07 struct type var = { .a = 1, .b = 2 }
19:23:16 sipa: can you #ifdef support for accounting allocators for only certain versions of gcc until we move to C++17?
19:23:30 jnewbery: +1!
19:23:34 jnewbery: ugh
19:23:34 luke-jr: omg, don't tease things like that
19:23:37 that's horrendous
19:23:51 aj: it's common C99, no clue why C++ forbids it :/
19:24:09 luke-jr: just use unnamed initializers?
19:24:17 cfields: but the whole point is the names!
19:24:18 I mean, accurate memory accounting is nice but not critical I suppose, not enough reason to really forbid building on a platform
19:24:59 cfields: I ended up putting defaults for every member, and just setting the ones I care about after init
19:25:19 wumpus: it would mean ifdefs all over to maintain the old heuristics for every data structure, plus an accounting based one
19:25:29 https://github.com/bitcoin/bitcoin/pull/19463/files#diff-b2bb174788c7409b671c46ccc86034bdR291
19:25:33 so i guess i'd just wait instead
19:25:35 anyhow, according to #16684 the minimum gcc version will go to 8.3
19:25:38 https://github.com/bitcoin/bitcoin/issues/16684 | Discussion: upgrading to C++17 · Issue #16684 · bitcoin/bitcoin · GitHub
19:25:47 or try to minimize the risk of copying
19:25:59 sipa: can you point to exactly what old gcc gets wrong? I'm just curious to see.
19:26:07 sipa: ah ok. If it's super invasive to do, then not worth it
19:26:28 cfields: have a container with an accounting allocator, encapsulated in some class
19:26:36 return a copy of it for public consumption
19:26:37 is changing this really urgent or can it wait until after 0.21?
19:26:56 now any changes to that copy need to lock the origin data structure's accounting variable
19:27:11 8.3 sounds pretty recent; is it already a sure thing major distros will have it in their stable releases?
19:27:11 because they're shared
19:27:22 gcc 7 is enough: https://github.com/bitcoin/bitcoin/pull/19183/files#diff-0c8311709d11060c5156268e58f5f209R14
19:27:57 MarcoFalke: okay maybe I'm misreading the issue then
19:27:59 8.3 is in debian stable as the default gcc, only gcc 6 in oldstable
19:28:15 aj: RHEL tends to be the bottleneck
19:28:18 in any case there is going to be a large bump
19:28:38 sipa: Maybe rebase to see if msvc can compile it with C++17. If not, there is something else to look into first anyway. Pretty sure the 3 months will pass quickly ;)
19:28:39 luke-jr: it has software collections now, so you get new gcc/clang on old rhel pretty easily
19:28:54 RHEL8 has gcc 8.2
19:28:57 aj: oh!
19:29:37 RHEL7 uses gcc 4.8 by default
19:29:53 do we care about old stables now?
19:29:53 (we've strayed a bit from the topic, but that's ok unless someone has something else)
19:30:01 https://www.softwarecollections.org/en/scls/rhscl/devtoolset-8/ -- gcc 8 for rhel 6 and 7
19:30:08 mempool delta notifications topic
19:30:13 can always cross-compile anyway
19:30:21 wumpus: that's a bit much for most users
19:30:34 #topic mempool delta notifications (instagibbs)
19:30:56 sipa: I was under the impression you weren't supposed to inherit from std::allocator in C++11.
19:31:04 ok, will look more after the meeting.
19:31:10 Ok, so recently I wrote a one-off zmq notification for mempool evictions, which are currently not covered. Other people have more extensive ideas: https://github.com/bitcoin/bitcoin/pull/19476
19:31:18 shuckc, can you speak to motivation/usage?
19:31:19 luke-jr: maybe, it's not that much more difficult, especially nowadays with the extreme availability of VMs etc
19:31:32 cfields: ah, i can try avoiding that
19:31:43 promag also made a WIP mempool delta RPC as another possible option https://github.com/bitcoin/bitcoin/pull/19476
19:32:00 sipa: no idea if that's useful, need to spend a whole lot more time understanding what you're doing :)
19:32:53 We track a huge number of wallets; keeping mempool contents synchronised is tricky given incomplete notifications, and difficult to synchronise between api and zmq notifications
19:33:35 seems like an opportunity to cover off a lot of the edge cases in one go
19:33:48 so ideally you could getrawmempool like once, then use zmq notifications to track deltas, then maybe call getrawmempool when something drops for whatever reason, and be able to figure out "where" in that notification stream the snapshot is from
19:33:54 instagibbs: you linked the same PR twice
19:33:58 oh woops
19:34:09 https://github.com/bitcoin/bitcoin/pull/19462#issuecomment-656140421
19:34:11 instagibbs: ZMQ is very unreliable..
19:34:38 instagibbs: that makes sense
19:34:45 no, ZMQ is not very unreliable
19:34:52 sure luke-jr, so the alternative would likely look like promag's pr i linked
19:34:56 maybe long polling
19:34:59 only in pretty extreme circumstances it sometimes drops a packet
19:35:02 ZMQ has no reliability _guarantees_
19:35:17 and in that case there needs to be a way to resynchronize, anyway, as instagibbs says
19:35:27 wumpus: I used it for low traffic on a reliable network, and it still lost stuff regularly
19:35:28 but absent overflow conditions, it is very reliable
19:35:32 notifications have sequence numbers to be able to detect that
19:35:38 luke-jr: hmm, ok
19:35:39 and avoiding "lots" of getrawmempools I guess is the biggest goal
19:35:44 sipa: yes, in the general case it is very reliable
19:35:47 the biggest problem is if you have to take the client offline for a bit
19:35:54 ZMQ generally works as well as any other pubsub system so long as you have the high watermark set sufficiently high, and you are not trying to consume slower than the publisher is producing. I don't see that as a particular obstacle
19:36:11 unless your client is not consuming the notifications reliably: there can't be an infinite buffer
19:36:24 (unless you'd spool to disk or something like that)
19:36:47 shuckc: zmq pub doesn't hold msgs if the client is offline.
19:36:51 but I don't think adding yet another notification system with mail-like reliability is really what we want
19:37:10 if your client goes away, you are going to have to hit getrawmempool for sure, but would like to avoid those calls in the general case as it's a big result set (even when brief)
19:37:39 I mean there's mq systems like rabbitmq that guarantee 100% reliability
19:37:51 the approach in #19476 avoids periodic getrawmempools
19:37:53 https://github.com/bitcoin/bitcoin/issues/19476 | wip, rpc: Add mempoolchanges by promag · Pull Request #19476 · bitcoin/bitcoin · GitHub
19:38:00 why not have ZMQ log every txid it sends a notification for, along with the seq number? If your client detects a drop it can consult the log and query the mempool for that txid
19:38:04 but I mean, how many of these things do you want to integrate with
19:38:07 promag, your rpc could maybe be generalized into block hash announcements too, connect/disconnect
19:38:24 jnewbery, it has a seq number
19:38:36 but right now it's a hodgepodge of endpoints
19:38:41 yes, it has a seq number
19:38:43 and missing eviction entirely
19:39:07 right, we're talking about two things here. Let's assume that your eviction PR is merged
19:39:11 all the zmq endpoints have seq numbers
19:39:31 if the eviction PR is merged, the remaining issues are:
19:39:31 I'm not sure these are actually useful, because people keep complaining about this
19:39:53 wumpus: you don't know what sequence number a getrawmempool response corresponds to
19:40:02 no, indeed you don't
19:40:15 I cannot know for sure if the txn hash broadcasts are adds or block removals, as they don't specify, and I can't know for sure how it lines up with the results of getrawmempool
19:40:16 for reliability, ZMQ can log seq_num:txid to file every time a notification is sent. If a client detects a missing seq_num, you consult the log and query the mempool rpc for that file
19:40:35 zmq logging to file? that sounds really at odds with low latency
19:40:35 *for that txid
19:40:45 wumpus: mkfifo ☺
19:40:55 luke-jr: a fifo is one to one, not one to many
19:40:56 not sure why ZMQ is involved at that point lol
19:40:58 true
19:41:05 luke-jr: if only UNIX had a one to many notification mechanism
19:41:13 except for mail
19:41:14 dbus?
19:41:27 wumpus: oh you can have many writers and many readers for one fifo just fine ;) and no guarantee which write goes to which read
19:41:29 wumpus: wall(1) :)
19:41:32 * luke-jr is not sure dbus actually has this
19:41:35 aj: lol
19:41:38 no, dbus doesn't have it either
19:41:48 dbus is one to one, it has no reliable multi-consumer broadcast
19:42:13 I suppose you could just use a tmpfs file
19:42:13 it's a difficult issue in general
19:42:20 because some consumer might always be lagging
19:42:21 and punch holes at the start as desired
19:42:29 this can potentially result in infinitely much storage needed
19:42:36 or store to disk and let Linux's buffers handle it
19:42:47 rabbitmq is pretty good if you really need this
19:45:10 Well, aside from fixing infinite buffer problems, I think it'd be good to improve where we can. When there's a failure there's always the fallback of getrawmempool, for example
19:45:11 wumpus: zmq already logs (if -logging=zmq is enabled). It just doesn't log the seq num, so it's not easy for a client to tell which messages were dropped
19:45:32 I was joking that you could also do minisketch for set reconciliation of mempool views
19:45:47 haha
19:45:54 jnewbery: I'm not sure we want to encourage software to parse debug.log …
19:45:58 zmq to keep the difference small ;)
19:46:00 jnewbery: I don't think clients can ever know what message is dropped; usually, missing a sequence number means having to resynchronize in some way (e.g. query the entire mempool)
19:46:11 luke-jr: exactly
19:46:30 I don't think 'parse the log' is a good option, though it serves one-to-many notification perfectly
19:46:37 mq is essentially a log
19:46:43 until your disk is full
19:46:45 a dedicated, well-defined-format log might be okay
19:47:07 it's also a high latency option, but that might not matter
19:47:07 but something needs to do hole-punching to clean it up before the disk fills
19:47:15 wumpus: why is it high latency?
19:47:23 yes, but if you do that, some clients might miss messages
19:47:32 luke-jr: bitcoind will already shut down when the disk is full
19:47:33 :)
19:47:35 depends on who does it
19:47:35 unless they tell you they read up until that far
19:47:43 sipa: yes, but you don't want that
19:47:55 luke-jr: because disk/block devices are slow compared to networking, latency wise
19:48:02 wumpus: Linux at least has buffers
19:48:08 even with that
19:48:11 :/
19:48:20 the write/read won't even need to hit disk
19:48:32 * luke-jr wonders if you can tell Linux to never flush to disk unless it has to
19:48:37 per-device*
19:48:45 per-file would also be nice :P
19:49:04 yes, there's an option for that afaik, but it also means in case of power loss...
19:49:47 shuckc: it sounded like you were going to mention more issues. Was there anything else?
19:49:50 reliable delivery of messages to multiple consumers is a difficult topic
19:50:08 we need a blockchain
19:50:13 sipa: :-)
19:50:31 so i think the biggest outstanding issue (if evictions are announced and we're ok with drops once in a while) is being able to line up getrawmempool results with the notifications
19:50:32 yes, at least the blockchain always allows going back in time... well, unless pruning
19:50:44 pruning is kind of the 'what if not all consumers have seen this yet' problem
19:50:45 the sequence number on the response to getrawmempool
19:51:09 obviously has backward compatibility concerns; any other suggestions?
19:51:20 shuckc: hmm?
19:51:29 wumpus: there is an option for that? what? :o
19:51:32 sipa, he wants to know where the mempool "snapshot" came from
19:51:37 does getrawmempool report a sequence number now?
19:51:41 no
19:51:45 it doesn't
19:51:49 no, I'm suggesting it should
19:51:49 and adding one would help?
19:51:52 yes
19:51:54 oh, ok
19:52:03 so you don't know if the getrawmempool result is stale or from the "future" wrt zmq reports
19:52:05 what would be the reason not to?
19:52:05 because you can't tell which delta(s) have already been added
19:52:09 I guess it could prove that there were no updates in between
19:52:12 adding new fields has no backward compatibility concerns
19:52:19 sipa, I think you'd have to add *all* the zmq notification seq numbers
19:52:24 ah
19:52:27 well, it's an array result ;
19:52:28 ;)
19:52:34 can only be added if verbose=true in getrawmempool
19:52:39 but like I said I think optional arg -> json object
19:52:45 I wonder why every zmq message has its own sequence number
19:52:54 couldn't it be just one increasing atomic?
19:52:55 unless you have one single notifier that you use for all the messages you need to sync with
19:52:56 wumpus, it's a local member of the notification, for whatever reason
19:53:09 wumpus: a client knows if he missed anything
19:53:18 promag: oh, true
19:53:18 it's only a mess if you need to track lots of notifiers
19:53:32 yes, makes sense, a client is generally interested in only a subset of message kinds
19:53:35 shuckc, shouldn't be too bad, you just grab the one you care about
19:53:45 so a global sequence number would be useless
19:54:11 wumpus: not really, that number should be exposed in both the rpc response and the zmq message
19:54:18 in any case getmempool would only need the mempool related numbers...
19:54:26 but I'm not fond of that..
19:54:42 it seems like some kind of layer violation
19:54:46 having RPC query ZMQ
19:54:54 because as a client, you need to use rpc AND zmq
19:54:59 yes
19:55:02 otherwise you need to somehow ask for a unique snapshot
19:55:11 hence my draft PR
19:55:11 of the mempool
19:55:34 but yes, it's a light violation at least
19:55:40 shuckc, zmq can and will silently drop messages; unless you have sequence numbers in the application layer you cannot detect that
19:55:44 My suggestion includes connectblock and disconnectblock notifications on the same new channel, because they allow you to keep your local mempool up to date, and equally you need to know where in the stream of deltas they arrived
19:56:00 of course, that could be worked around by having both RPC and ZMQ query another source for sequence numbers
19:56:03 how about if the mempool itself had a seq number that is incremented on every add/remove?
19:56:09 right
19:56:19 jnewbery, that's promag's pr pretty much-ish
19:56:20 I think that would make sense
19:57:08 3 minutes
19:57:25 so +1 on jnewbery/promag's idea then
19:57:44 with promag's suggestion, bitcoind has to keep state/buffer for each consumer; the zmq model makes it stateless, and promag you also need to poll for new messages, which is something of a step backwards
19:58:05 I didn't understand it like that
19:58:09 note that in my PR, the "stream" will be upper bounded in size, so no OOM concerns
19:58:10 shuckc: not if he adds longpolling
19:58:15 the mempool itself would keep the seq number
19:58:24 not per consumer
19:58:29 shuckc: ill add long poll
19:58:36 it's kind of strange if different consumers have different seq ids
19:58:41 do any other commands use long polling?
19:58:45 because this is global state we're exposing
19:58:46 GBT
19:58:46 wumpus, if zmq is using a global sequence number for all messages, i'd suggest just adding that to rpc as a header or something
19:58:46 shuckc: yes
19:58:51 getblocktemplate <--- GBT
19:59:12 I don't like longpoll that much tbh, at least in json rpc
19:59:33 especially because of libevent, timeouts etc
19:59:51 but "poll and wait up to n secs" is fine imo
19:59:52 I don't think adding a different notification mechanism will really solve the 'clients could stop consuming and fall behind' problem
20:00:02 promag: same thing? :P
20:00:03 it would mean accumulating in memory in that case?
20:00:20 luke-jr: n secs < timeout (:
20:00:26 wumpus: yes
20:00:31 I guess I am the hammer that sees nails everywhere, but how about a hash (muhash) of the mempool state instead of a sequence number? but not sure i grasp the problem completely yet...
20:00:33 but that's fine
20:00:50 fjahr: then you need to log the hash..
20:00:55 if the client doesn't pull the stream, the stream will be terminated
20:00:56 no, I don't think that's fine; if there's no limit, a client could forget to connect and fill your memory entirely
20:01:00 fjahr, even more ridiculous than my minisketch idea ;P
20:01:10 promag: that's just another "lose notifications" then
20:01:17 hehe
20:01:41 wumpus: it's one thing to begin dropping stuff after N minutes of downtime; another to lose them randomly as a normal event
20:01:47 reliable notification is really a non-trivial issue :)
20:01:49 wumpus: no, the stream will be terminated, the client starts over and gets a fresh stream
20:01:57 in any case, it's time
20:02:09 instagibbs: (minisketch is a great idea!)
20:02:11 #endmeeting