19:00:15 <wumpus> #startmeeting
19:00:15 <lightningbot> Meeting started Thu May 30 19:00:15 2019 UTC.  The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:15 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:20 <moneyball> hi!
19:00:23 <instagibbs> hi
19:00:25 <jamesob> hi
19:00:31 <meshcollider> hi
19:00:33 <moneyball> https://gist.github.com/moneyball/071d608fdae217c2a6d7c35955881d8a
19:00:38 <promag> hi
19:00:44 <jnewbery> hi
19:00:47 <jarthur> 'lo
19:00:50 <fanquake> hi
19:01:18 <wumpus> #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier jl2012 achow101 meshcollider jnewbery maaku fanquake promag provoostenator aj Chris_Stewart_5 dongcarl gwillen jamesob ken281221 ryanofsky gleb moneyball kvaciral
19:02:08 <wumpus> any proposed topics? (in addition to moneyball's) ?
19:02:41 <fanquake> Probably any suggestions for short talks at Core Dev? Is there a list of anything like that already?
19:02:49 <instagibbs> kanzure, ^
19:03:31 <fanquake> If so that's fine, let's discuss the other topics.
19:03:58 <moneyball> fanquake be sure to provide kanzure with any topics you'd like to discuss at CoreDev!
19:03:58 <sipa> about to get to the airport, i think we'll have better meeting time in amsterdam
19:04:24 <promag> jarthur:
19:04:25 <wumpus> #topic High priority for review
19:04:29 <promag> (oops)
19:04:48 <wumpus> 5 things on the list now https://github.com/bitcoin/bitcoin/projects/8
19:05:19 <promag> sorry, what is the date for 0.18.1?
19:05:23 <wumpus> anything to add/remove ?
19:05:31 <wumpus> promag: whaa? is 0.18.1 planned at all?
19:05:46 <wumpus> I don't think anything made it necessary yet
19:06:01 <achow101> hi
19:06:10 <fanquake> https://github.com/bitcoin/bitcoin/pulls?q=is%3Aopen+is%3Apr+milestone%3A0.18.1
19:06:19 <promag> also https://github.com/bitcoin/bitcoin/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3A%22Needs+backport%22+
19:06:46 <achow101> #15450 for hi prio
19:06:49 <gribble> https://github.com/bitcoin/bitcoin/issues/15450 | [GUI] Create wallet menu option by achow101 · Pull Request #15450 · bitcoin/bitcoin · GitHub
19:07:04 <promag> but never mind, I get it, it's too soon
19:07:12 <fanquake> added
19:07:55 <wumpus> absent any really important bug fix I think it's too soon, yes, we don't usually do a lot of minor releases
19:08:23 <fanquake> There's a call for some more review on #15976, has 4 utACKs at the moment (already high-prio).
19:08:26 <gribble> https://github.com/bitcoin/bitcoin/issues/15976 | refactor: move methods under CChainState (pt. 1) by jamesob · Pull Request #15976 · bitcoin/bitcoin · GitHub
19:09:52 <wumpus> okay!
19:10:45 <wumpus> #topic Has travis got worse? (jnewbery)
19:10:54 <wumpus> "has travis got worse or have we just added so many builds to our job that it times out?"
19:11:16 <wumpus> I've wondered this too, travis has been more unreliable (on PRs at least) than it used to be
19:11:22 <jnewbery> In the last couple of months, *a lot* of travis builds time out
19:11:29 <wumpus> while I don't notice this for local tests
19:11:31 <jamesob> hasn't seemed any worse to me recently, though we've had to rekick certain jobs for months
19:11:41 <jnewbery> I don't know if our builds got slower, travis got slower, or we just added too many jobs for travis to handle
19:11:51 <achow101> a lot of things have been added recently, maybe it's too much for them to handle?
19:11:57 <wumpus> at least we should be careful with making the tests even longer now
19:12:23 <fanquake> Also a lot of depends related PRs timing out recently, but not much that can be done about that.
19:12:24 <instagibbs> There is an element of how Travis is feeling that day
19:12:30 <instagibbs> lots of variance in build times
19:12:47 <wumpus> right, it's very possible that it's not our fault entirely though and it's simply the infrastructure becoming worse
19:13:11 <jnewbery> There are currently 10 different test stages. I know it used to be 6 or 7
19:13:20 <wumpus> I haven't noticed the tests nor builds becoming noticably slower locally
19:13:59 <wumpus> jnewbery: hmm might be time to evaluate whether they're really all contributing something useful
19:14:55 <instagibbs> in elements land we've been having weird issues too that might reflect travis being overly taxed, hard to say
19:15:06 <instagibbs> failure to write to files, that kind of stuff
19:15:27 <promag> best case I usually see is around 20min (for the longest job, job 8)
19:15:36 <jnewbery> I know it runs those test stages in parallel
19:15:38 <wumpus> yes weird stuff happens but I don't think we see that often, it's mostly just timeouts
19:15:40 <luke-jr> instagibbs: does the Elements Travis have caching enabled?
19:15:46 <ryanofsky> are people seeing travis errors other than "Error! Initial build successful..."? This is the only travis error i see and restarting fixes it 100% of the time
19:16:04 <jnewbery> ryanofsky: yes, that's the error
19:16:22 <luke-jr> ryanofsky: I've seen cases where restarting *doesn't* fix it
19:16:22 <instagibbs> ryanofsky, that's when depends takes "too long"
19:16:22 <promag> ryanofsky: sometimes I see others and I leave a comment in the PR (before restarting) or create an issue
19:16:24 <instagibbs> and it early exits
19:16:32 <luke-jr> ryanofsky: but they mysteriously resolved before I could troubleshoot :/
19:16:45 <instagibbs> luke-jr, I believe so, the restarting thing fixes that issue
19:17:08 <jnewbery> The longest running test stage is "sanitizers: address/leak (ASan + LSan) + undefined (UBSan) + integer". I wonder if the same hardware is shared between different test stages and whatever is running at the same time as that one might time out
19:17:10 <wumpus> it *should* be fixed by restarting, that's the point of that message, it's an ugly hack though
19:17:44 <jnewbery> yes, travis is supposed to save developer time, not add an extra step to opening a PR!
19:18:21 <luke-jr> jnewbery: on that note, it's annoying that AppVeyor doesn't use standard build tools, and requires duplicating changes for it
19:18:56 <meshcollider> Is it possible for a new drahtbot feature to auto-restart builds with that error?
19:19:11 <achow101> apparently travis switched from docker containers on ec2 to vms on gce late last year, maybe that's related?
19:19:47 <jnewbery> has anyone raised this with travis? We have a paid account, right? Can we try to get support?
19:19:48 <promag> does it run multiple "build depends" with the same conf if needed? sounds unnecessary?
19:21:06 <luke-jr> jnewbery: Travis support is pretty useless in my experience :/
19:21:26 <luke-jr> jnewbery: I expect we'd at the very least need a very specific concern
19:21:47 <jnewbery> luke-jr: support is pretty useless in my experience :/
19:21:50 <wumpus> yes, filing an issue 'it is slow' probably won't do much, I'm sure they get that a lot
19:21:57 <jamesob> circleci (https://circleci.com/) execution is very good in my experience but I am definitely not volunteering to migrate our .travis.yml :)
19:22:22 <wumpus> migrating to another CI would definitely be an option if it's better
19:22:32 <meshcollider> jnewbery: Travis is free for open source projects, we don't pay
19:22:36 <wumpus> (and then I really mean migrating not adding another one)
19:22:48 <luke-jr> meshcollider: we do have a paid account, though
19:22:56 <meshcollider> Oh?
19:23:09 <luke-jr> meshcollider: afaik the Bitcoin Foundation set it up way back when
19:23:09 <jnewbery> well, what's the issue exactly? There's some specific job timeout on travis, and so we cancel the build before that timeout to cache whatever has been built already? Can we ask them to increase the job timeout for us?
19:23:27 <jnewbery> I believe we have a paid account so we get more parallel builds
19:23:38 <jnewbery> because we were getting a backlog of PR builds a couple of years ago
19:23:48 <luke-jr> jnewbery: it used to also be required for caches (though I'm not sure if they expanded that to free accounts maybe)
19:23:52 <jamesob> meshcollider: Chaincode started kicking in for Travis a year and change ago
19:24:51 <wumpus> but it didn't use to run into the timeout so often, so it's become slower, that's really the issue, not the timeout itself; increasing the timeout would work, up to a point, but doing that indefinitely makes the testing slower and slower which isn't good either
19:26:15 <wumpus> are we somehow doing more travis builds than before? e.g. how often does drahtbot re-kick builds?
19:26:21 <jnewbery> yeah, someone needs to investigate where the slowness comes from. Is there an API to pull down all the build results from Travis for a project so we can at least get a sense for how often things are failing?
19:26:35 <wumpus> yes, travis has a quite extensive API
19:26:37 <luke-jr> jnewbery: there's certainly an API
19:26:48 <jnewbery> One issue is that restarting a build makes the logs for the failed build unavailable (at least on the website)
19:26:49 <luke-jr> (including one that lets you SSH into the VM)
19:27:36 <wumpus> jnewbery: I don't know if that's the case for the API, or whether the new log will get a new id
19:27:52 <luke-jr> wumpus: pretty sure it overwrites the existing log
19:28:09 <jamesob> I think there's some amdahl's law at work here though - is speeding up travis really going to make the process materially faster? we're pretty review-bound
19:28:13 <wumpus> ah yes I'm sure too--this is the URL: https://api.travis-ci.org/v3/job/$1/log.txt; it's per job, and it will have the same job id
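
A minimal sketch of what pulling those results could look like, assuming the public Travis v3 endpoints and the Python requests library; the TRAVIS_TOKEN variable and the job id below are placeholders, not anything from the meeting:

    #!/usr/bin/env python3
    # Rough sketch: pull recent build results for bitcoin/bitcoin from the
    # Travis v3 API to get a sense of how often jobs fail or time out.
    # A token may be needed for some requests; pass one via TRAVIS_TOKEN.
    import os
    import requests

    API = "https://api.travis-ci.org"
    HEADERS = {"Travis-API-Version": "3"}
    token = os.environ.get("TRAVIS_TOKEN")
    if token:
        HEADERS["Authorization"] = "token " + token

    # Most recent builds for the repo (the slug must be URL-encoded).
    r = requests.get(API + "/repo/bitcoin%2Fbitcoin/builds",
                     params={"limit": 50}, headers=HEADERS)
    r.raise_for_status()
    for build in r.json()["builds"]:
        print(build["number"], build["state"], build["event_type"])

    # Per-job raw log, the same URL shape mentioned above; the job id stays
    # the same across restarts, so the old log is overwritten.
    job_id = 123456789  # hypothetical job id
    print(requests.get(f"{API}/v3/job/{job_id}/log.txt", headers=HEADERS).text[:500])
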
19:28:35 <wumpus> jamesob: *making it fail less* will lower frustration
19:28:52 <jamesob> wumpus: yeah, agree spurious failures are annoying
19:29:17 <jnewbery> yeah, it's not about how long it takes, it's that if you open a PR, most of the time you'll need to log back in an hour later or more to rekick travis
19:29:18 <wumpus> it's really frustrating if tests fail randomly, if that happens too often people take them less seriously which means actual problems might be ignored
19:29:32 <luke-jr> maybe a flag to tell Travis "updated caches, please restart"?
19:29:40 * luke-jr ponders if the Travis job can call the API to restart itself
19:30:18 <jnewbery> wumpus: exactly that - inconsistently failing builds/tests lower confidence in the product/tests and hide real problems
19:30:19 <luke-jr> wumpus: well, false failures on tests is another issue IMO
19:31:27 <promag> luke-jr: it needs an auth token, not sure if there's a way to set secrets in travis
19:31:45 <jnewbery> luke-jr: that'd be nice, or to get drahtbot to do it, but this is just a variation on the increase timeouts and kick the can down the road, no?
19:32:01 <luke-jr> promag: there is, but it may be a dead idea since the job would still be running :/
19:32:21 <luke-jr> jnewbery: dunno, it might be nice to restart after cache updates regardless
19:32:35 <luke-jr> just to make it more deterministic for the actual tests
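
For the auto-restart idea, a hedged sketch of what a bot (drahtbot or otherwise) might do with an API token kept as a secret; the build id and token handling here are assumptions for illustration, not an existing feature:

    #!/usr/bin/env python3
    # Sketch of an auto-restart helper: restart a Travis build whose jobs hit
    # the "Error! Initial build successful" cache-and-exit path. The
    # /build/{id}/restart endpoint is part of the v3 API; the token and the
    # build id below are placeholders.
    import os
    import requests

    API = "https://api.travis-ci.org"
    HEADERS = {
        "Travis-API-Version": "3",
        # secret, e.g. supplied to the bot out of band or as an encrypted env var
        "Authorization": "token " + os.environ["TRAVIS_TOKEN"],
    }

    build_id = 987654321  # hypothetical build id, e.g. taken from a webhook payload
    r = requests.post(f"{API}/build/{build_id}/restart", headers=HEADERS)
    r.raise_for_status()
    print("restart requested:", r.json().get("@type", "ok"))
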
19:33:28 <wumpus> #topic GitHub feedback issue (moneyball)
19:33:32 <fanquake> Should we move on to another topic? Seems like the conclusion here is to go away and investigate Travis/bot/other CI options and discuss in AMS?
19:33:34 <moneyball> hi!
19:33:48 <moneyball> so we have this issue https://github.com/bitcoin/bitcoin/issues/15847
19:33:49 <wumpus> ""please review GitHub feedback issue and vote with thumbs up on issues you think you make the top 10 https://github.com/bitcoin/bitcoin/issues/15847""
19:33:52 <moneyball> with many responses - thank you!
19:34:19 <moneyball> i would like to suggest folks take a look at the issue and "vote" for the ideas that resonate with you by doing a thumbs up
19:34:46 <fanquake> moneyball is there any timeline for when that meetup might happen?
19:34:51 <moneyball> this will help us figure out a top 10 for the CEO lunch, which is now scheduled for june 21, and sipa, cfields, and i plan to attend
19:35:20 <moneyball> we can also review this next week at CoreDev if there is interest
19:35:58 <moneyball> that's all i had to say, unless there are any other questions right now
19:36:20 <sipa> i think it's most useful to talk about issues which are mostly unique to our usage of github
19:36:37 <wumpus> thank you, yes this was expected to be a fairly short topic
19:36:42 <sipa> as i expect that there are a lot of grievances that are mostly shared between projects
19:36:46 <luke-jr> hmm, it'd be nice to use GitHub's code review stuff but also verify the diff against a local git instance somehow
19:37:05 <sipa> probably best to discuss on the issue tracker
19:37:09 <luke-jr> sometimes I review on GitHub, and therefore leave a commithash out of ACKs because I don't know for sure the commithash is what I reviewed
19:37:10 <moneyball> luke-jr please make sure to add your ideas to the github issue
19:37:23 <luke-jr> right, but I don't know if this is even a viable issue to address :p
19:39:09 <wumpus> ok, any other topics?
19:40:24 <wumpus> thanks everyone, see (some of) you next week in Amsterdam
19:40:32 <wumpus> #endmeeting