sagax has quit [Remote host closed the connection]
sagax has joined #jruby
ur5us has quit [Ping timeout: 252 seconds]
joni_pv[m] has joined #jruby
<joni_pv[m]>
not sure if this is jruby related, but: any idea why a warbled rails app running inside tomcat could not require sassc? the app is working now that sass-rails and sassc are in the development group. that's ok, but I don't understand why sassc could not be required
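One possible explanation (an assumption, not confirmed in the discussion): Warbler by default leaves the development and test bundle groups out of the WAR, so with sassc in the main group Bundler.require tries to load it at runtime inside Tomcat (the gem wraps a native libsass extension), while moving it to development means it is never required at runtime and assets have to be precompiled at build time instead. A hypothetical Gemfile arrangement along those lines:

    source "https://rubygems.org"

    gem "rails"

    # Sass toolchain only needed while precompiling assets,
    # not when serving the warbled app from Tomcat.
    group :development do
      gem "sass-rails"
      gem "sassc"
    end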
shellac has joined #jruby
kasaltie[m] is now known as kasaltie
kasaltie has quit [Quit: authenticating]
kasaltie has joined #jruby
kasaltie has quit [Quit: authenticating]
kasaltie has joined #jruby
lucasb has joined #jruby
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
dhoc has joined #jruby
shellac has quit [Quit: Computer has gone to sleep.]
subbu is now known as subbu|lunch
subbu|lunch is now known as subbu
<headius[m]>
yo yo yo
<headius[m]>
lopex: how goes? I'm doing one more pass over failures
<enebo[m]>
Assuming my latest fix is totally golden I am switching to begin;else;end
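For context, begin;else;end here refers to the Ruby construct where an else clause after rescue runs only when the begin body raised no exception. A minimal sketch of the semantics (the values and messages are illustrative, not from the build):

    begin
      value = Integer("42")   # raises nothing
    rescue ArgumentError
      puts "rescue: runs only if the body raised"
    else
      puts "else: runs only when no exception was raised"
    ensure
      puts "ensure: runs either way"
    end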
<enebo[m]>
With that one done we will be green on ci
<enebo[m]>
I think
<headius[m]>
ok
<headius[m]>
yeah specs only need that one to be green
<headius[m]>
there's like 60 other fails across the MRI suites and I'm going to mop those up
<enebo[m]>
yep finished 1 more
<headius[m]>
have you pushed anything?
<enebo[m]>
yes
<headius[m]>
sweet
<enebo[m]>
that was how I knew it had one more
<enebo[m]>
I mean I did locally too but it was not real until travis said so
<headius[m]>
I'll try to fix a few of these bigdecimal items
<headius[m]>
ok these aren't terrible
<enebo[m]>
lol the number of machines travis allocates to us seems to be lower than normal
<headius[m]>
slow getting jobs going?
<enebo[m]>
5?
<headius[m]>
they are probably slowly tightening that belt
<headius[m]>
heh yeah that's way lower than we usually get
<enebo[m]>
I see only 5 tests running at a time but see nothing else running
<headius[m]>
time to start migrating more to GH actions
<enebo[m]>
I really don't know how long these first jobs take, but I believe they were the longer ones, since all the others would get finished by the extra workers
<enebo[m]>
I was going to wonder whether we are even getting the same sized instances, but the timings are not so different in total time taken
<enebo[m]>
I feel like we went from 8 to 5 workers
<headius[m]>
I know we had at least ten
<headius[m]>
maybe more
<enebo[m]>
ok false alarm... there is something seriously wrong with the front end of travis... after doing like 10 reloads over the last 20 minutes, on this final reload the entire build went from 5 workers to just being totally done
<headius[m]>
heh when is there not something seriously wrong with the front-end of travis