dkubb changed the topic of #datamapper to: Datamapper v1.2.0 | Mailing List: http://is.gd/aa9D | Logs: http://is.gd/qWAL7V | DataMapper 2 Renamed to ROM, see #rom-rb for development
ckrailo has quit [Quit: Computer has gone to sleep.]
lnormous has quit [Remote host closed the connection]
mbj has quit [Quit: leaving]
dkubb has quit [Read error: Connection reset by peer]
ckrailo has joined #datamapper
Sylvain2 has quit [Ping timeout: 248 seconds]
dkubb has joined #datamapper
mralk3 has quit [Remote host closed the connection]
kurko__ has joined #datamapper
ckrailo has quit [Ping timeout: 246 seconds]
ckrailo has joined #datamapper
kurko__ has quit [Ping timeout: 264 seconds]
kurko__ has joined #datamapper
v0n has quit [Ping timeout: 276 seconds]
rsim has joined #datamapper
rsim has quit [Ping timeout: 240 seconds]
snusnu has quit [Quit: Leaving.]
Sylvain1 has joined #datamapper
kurko__ has quit [Quit: Computer has gone to sleep.]
zombor has joined #datamapper
rsim has joined #datamapper
rsim has quit [Client Quit]
zombor has quit [Remote host closed the connection]
rsim has joined #datamapper
rsim has quit [Client Quit]
<dkubb> !memo mbj I think rubocop's dep on an old version of parser is blocking mutant from being upgraded because mutant specifies a new version of parser
<Cinchy> dkubb: Memo recorded for mbj.
northrup has joined #datamapper
<dkubb> !memo solnic ok, so this is working now: https://gist.github.com/dkubb/6031208 ... please try it out and lmk if the interface is what you and snusnu expected it to be
<Cinchy> dkubb: Memo recorded for solnic.
<Cinchy> [gist] Examples for axiom-memory-adapter (at gist.github.com, dkubb on 2013-07-27 06:40)
rsim has joined #datamapper
rsim has quit [Quit: Leaving.]
mbj has joined #datamapper
<mbj> .
<Cinchy> mbj: [3h 27m 3s ago] <dkubb> I think rubocop's dep on an old version of parser is blocking mutant from being upgraded because mutant specifies a new version of parser
<mbj> dkubb: yeah
<mbj> dkubb: we have to use latest git version of rubocop
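A minimal Gemfile sketch of the workaround mbj describes, pinning rubocop to its git master so it pulls in the newer parser; the :metrics group name is illustrative:

    # Gemfile -- use rubocop from git until a released version allows parser >= 2.0
    group :metrics do
      gem 'rubocop', git: 'https://github.com/bbatsov/rubocop.git'
      gem 'mutant'
    end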
rsim has joined #datamapper
v0n has joined #datamapper
mbj has quit [Ping timeout: 248 seconds]
mbj has joined #datamapper
postmodern has quit [Quit: Leaving]
v0n has quit [Ping timeout: 240 seconds]
mbj has quit [Read error: Connection reset by peer]
DireFog_ has joined #datamapper
DireFog has quit [Quit: No Ping reply in 180 seconds.]
<dkubb> !memo mbj yeah, unfortunately it has a hard dep on parser 2.0 pre2: https://github.com/bbatsov/rubocop/blob/master/rubocop.gemspec#L30
<Cinchy> dkubb: Memo recorded for mbj.
snusnu has joined #datamapper
<dkubb> !memo mbj fwiw, I submitted an issue to the project: https://github.com/bbatsov/rubocop/issues/397 .. if I get a chance, and they don't fix it soon, I'll submit a PR with the version change
<Cinchy> dkubb: Memo recorded for mbj.
zombor has joined #datamapper
DireFog_ is now known as DireFog
<dkubb> !memo mbj do you think it would be possible to add a mutation like: !!expression -> expression ... I would like to ensure that the tests for this kind of code actually exercise an expression that is non-boolean. hints are appreciated
<Cinchy> dkubb: Memo recorded for mbj.
rsim has quit [Quit: Leaving.]
mbj has joined #datamapper
<mbj> .
<Cinchy> mbj: [2h 50m 51s ago] <dkubb> yeah, unfortunately it has a hard dep on parser 2.0 pre2: https://github.com/bbatsov/rubocop/blob/master/rubocop.gemspec#L30
<Cinchy> mbj: [2h 26m 28s ago] <dkubb> fwiw, I submitted an issue to the project: https://github.com/bbatsov/rubocop/issues/397 .. if I get a chance, and they don't fix it soon, I'll submit a PR with the version change
<Cinchy> mbj: [20m 6s ago] <dkubb> do you think it would be possible to add a mutation like: !!expression -> expression ... I would like to ensure that the tests for this kind of code actually exercise an expression that is non-boolean. hints are appreciated
<mbj> dkubb: yeah adding such a mutation is easy
<mbj> dkubb: Open an issue and I'll add it today
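A rough illustration of the mutation dkubb is asking for: mutant would replace the double negation with the bare expression, and the mutant only dies if a test asserts on the coerced true/false value. The method and variable names are made up:

    # original code under test
    def active?
      !!@activated_at    # coerces a timestamp-or-nil into true/false
    end

    # the proposed mutation rewrites the body to just:
    #   @activated_at
    # which survives unless a spec asserts e.g. expect(subject.active?).to be(true)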
rsim has joined #datamapper
rsim has quit [Ping timeout: 240 seconds]
<dkubb> mbj: it looks like rubocop was updated today too
v0n has joined #datamapper
<mbj> dkubb: nice
<mbj> dkubb: already working on double negation elimination
<mbj> but we have a barbecue here, back in a few hours :D
<dkubb> oh awesome, thanks
<dkubb> mbj: I'll just add mutations to the issue tracker as I think of them, if that's alright with you
<dkubb> I have no expectation that you'll work on them. I'm adding them as much for you as to remind myself; otoh I won't complain if you take any of them ;)
<mbj> hehe
<mbj> just add them under the "mutation" label
<mbj> I lose track of the good ones very often too.
rsim has joined #datamapper
zombor has quit [Remote host closed the connection]
v0n has quit [Read error: Connection reset by peer]
v0n has joined #datamapper
rsim has quit [Quit: Leaving.]
snusnu has quit [Quit: Leaving.]
rfizzle has joined #datamapper
<rfizzle> Datamapper seems to only be inserting 50 items at a time. MySQL database. Any thoughts?
<rfizzle> 50 items a second I mean
<rfizzle> I'm trying to insert 50,000 items....
dkubb|away has joined #datamapper
dkubb|away is now known as dkubb1
dkubb has quit [Ping timeout: 245 seconds]
dkubb1 is now known as dkubb
<rfizzle> No one?
<dkubb> rfizzle: are you using .save or .save! .. the former will run validations, which may not be necessary if you know the data to be valid
<dkubb> rfizzle: you should also look at logging out the sql queries to see if it's doing extra work
<dkubb> rfizzle: you can do this before doing DataMapper.setup to see all the sql queries: DataMapper::Logger.new($stdout, :debug)
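A small sketch of the logger setup dkubb describes; the connection URI is a placeholder:

    require 'dm-core'

    # must run before DataMapper.setup to capture every SQL statement
    DataMapper::Logger.new($stdout, :debug)
    DataMapper.setup(:default, 'mysql://user:password@db-host/my_database')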
<rfizzle> using .create!
<dkubb> hmm, yeah, that'll be skipping validations
<dkubb> rfizzle: you could try parallelizing it a bit using https://github.com/grosser/parallel .. use Parallel.each to iterate over your parsed list of items, and insert multiple at a time
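A hedged sketch of that suggestion using the parallel gem; items, Item and the thread count are placeholders:

    require 'parallel'

    # insert with a few threads at once; tune in_threads against the DB
    Parallel.each(items, in_threads: 4) do |attrs|
      Item.create!(attrs)
    end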
<dkubb> rfizzle: barring that, you can always do: repository.adapter.execute(sql_query, *bind_values) if you want to bypass DM completely and just write to the db
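And a sketch of the raw-adapter route from the line above; the table and columns are invented:

    # bypasses DM resources, validations and hooks entirely
    sql = 'INSERT INTO items (name, body) VALUES (?, ?)'
    DataMapper.repository(:default).adapter.execute(sql, attrs[:name], attrs[:body])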
<rfizzle> Yea, the thing is I'm getting an average of like 20 a second. That's extraordinarily slow. I'm using associations if that makes a difference
<dkubb> yeah, that sounds really slow
<dkubb> that's like 1 every 3 seconds
<dkubb> fwiw, usually I am seeing creates in the sub 50ms range most of the time
<dkubb> or sub 100ms if it's a bit more complex
<namelessjon> 20 a second, 50 ms a pop sounds about right.
<dkubb> oh
<dkubb> sorry, I read that as 20 a minute ;)
<dkubb> you should also check the mysql slow query log to see if there's anything slow going on
<namelessjon> Also, would a transaction help?
<dkubb> I don't know if I would ever use ruby to do batch loading of data, unless I couldn't trust it and needed to do validation or normalization on the data
<rfizzle> I'm basically parsing and inserting every line of a text file into a database field.
<rfizzle> But I need a way to insert like 50,000 in a timely manner.
<dkubb> what about converting the text to csv, and then using mysql's native import?
<dkubb> I guess it also depends on how many times you're going to be doing it
<dkubb> if it's a one-off I'd approach it completely differently from something where an app had to accept a file upload from a user and import it into a db
<dkubb> (for example)
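A sketch of the CSV-plus-native-import route dkubb floats, again through the raw adapter; it assumes LOCAL INFILE is permitted on the MySQL/RDS side, and the path, table and column are placeholders:

    require 'csv'

    CSV.open('/tmp/items.csv', 'w') do |csv|
      parsed_lines.each { |line| csv << [line] }
    end

    DataMapper.repository(:default).adapter.execute(
      "LOAD DATA LOCAL INFILE '/tmp/items.csv' INTO TABLE items " \
      "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' (body)"
    )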
<rfizzle> that's what I do. accept a file
<dkubb> is it a csv?
<rfizzle> no, just .txt. doing some testing and INSERTs are taking 0.280261s
<dkubb> is that what the logs say? mysql is taking 0.28 seconds to perform an INSERT?
<mbj> rfizzle: using transactions?
<dkubb> that's 200ms, that's *really* slow for mysql
<dkubb> obviously it varies based on the schema
<mbj> rfizzle: require 'dm-transactions'; YourModel.transaction { your_work }
<mbj> rfizzle: how far is your server away from client, in millisecs?
<mbj> If your insert takes 1ms server-side but 100ms sending packets around, you have to parallelize.
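A sketch of the dm-transactions suggestion, wrapping the whole batch in one transaction to cut per-statement round trips; Item and items are placeholders:

    require 'dm-transactions'

    # one COMMIT for the whole batch instead of one per insert
    Item.transaction do
      items.each { |attrs| Item.create!(attrs) }
    end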
<dkubb> I was using RDS with Amazon and i don't think it was that slow
<rfizzle> not far. Amazon EC2 and Amazon RDS
<dkubb> oh, heh
<rfizzle> both east coast.
<mbj> rfizzle: can you ping the machine?
<mbj> Or report a latency problem, AFAIK Amazon uses "shared hosting" with RDS.
<rfizzle> I can't ping the RDS...
<mbj> rfizzle: There is a ping-like statement in PostgreSQL.
<mbj> at the postgres protocol level.
<rfizzle> using MySQL
<mbj> sorry :D
v0n has quit [Quit: WeeChat 0.4.1]
v0n has joined #datamapper
<rfizzle> So I removed the transaction and now it inserts at around .01 and .02
<dkubb> there's going to be *some* overhead inserting records in ruby, but usually (by far) the bottleneck is IO
<dkubb> so it's 1/10th of the time now?
<rfizzle> yea
<dkubb> so now it'll be 200 p/minute
<dkubb> I assume you're doing this in a background process, like you're not making the user wait while your web framework carries out this work?
<dkubb> er 200 p/second
<dkubb> I guess less than that actually
<rfizzle> right. SuckerPunch
<dkubb> are you able to increase the number of workers? DM should be thread safe
<dkubb> you could try, say, 4 workers at once
<dkubb> RDS can handle an amazing number of concurrent writes
<dkubb> I would seriously doubt you could max it out from within ruby, even with raw sql queries
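A hedged sketch of raising the worker count as dkubb suggests; the job class is invented, and the workers / perform_async calls follow the SuckerPunch 2.x API, so older versions may spell this differently:

    require 'sucker_punch'

    class ImportJob
      include SuckerPunch::Job
      workers 4                      # up to four concurrent inserts

      def perform(attrs)
        Item.create!(attrs)
      end
    end

    parsed_lines.each { |attrs| ImportJob.perform_async(attrs) }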
<rfizzle> Alright. I will try that out. Thanks.
Sylvain1 has quit [Quit: Leaving.]
rafaelfranca has joined #datamapper
rafaelfr_ has joined #datamapper
rafaelfranca has quit [Ping timeout: 248 seconds]
rafaelfr_ has quit [Read error: Connection reset by peer]
rafaelfranca has joined #datamapper
rafaelfranca has quit [Read error: Connection reset by peer]
rafaelfranca has joined #datamapper
rafaelfranca has quit [Read error: Connection reset by peer]
rafaelfranca has joined #datamapper
rafaelfranca has quit [Read error: Connection reset by peer]
rafaelfranca has joined #datamapper
rsim has joined #datamapper
rsim has quit [Ping timeout: 240 seconds]
mbj has quit [Quit: leaving]
mbj has joined #datamapper
v0n has quit [Quit: WeeChat 0.4.1]
kurko__ has joined #datamapper
v0n has joined #datamapper
rafaelfranca has quit [Read error: Connection reset by peer]
rafaelfranca has joined #datamapper
jeremyevans has quit [Ping timeout: 260 seconds]
rfizzle has quit [Quit: Leaving]
mbj has quit [Quit: leaving]
postmodern has joined #datamapper
rafaelfranca has quit [Remote host closed the connection]
true_droid has joined #datamapper
true_droid has left #datamapper [#datamapper]
northrup has quit [Ping timeout: 248 seconds]
jeremyevans has joined #datamapper
zombor has joined #datamapper
v0n has quit [Ping timeout: 248 seconds]
v0n has joined #datamapper
rsim has joined #datamapper
rsim has quit [Ping timeout: 240 seconds]