[00:03:34] *** Joins: drbean (~drbean@TC210-63-209-180.static.apol.com.tw)
[00:04:25] *** Quits: drbean_ (~drbean@TC210-63-209-205.static.apol.com.tw) (Read error: Connection reset by peer)
[01:07:57] *** Joins: esg (~emil@esg.xen.prgmr.com)
[16:07:21] *** Quits: drbean (~drbean@TC210-63-209-180.static.apol.com.tw) (Read error: Connection reset by peer)
[23:36:48] *** Joins: e3928a3bc (b3d22ab9@gateway/web/freenode/ip.179.210.42.185)
[23:37:14] hello there!
[23:42:25] I feel like this is a stupid question, but how do I parse sentences from a file in such a way that I can easily see which ones fail?
[23:42:53] using -tr doesn't help, because I first get all the strings, and then all the trees
[23:43:41] and parse | l -treebank will hide the errors from the output
[23:44:31] I've tried several things like rf -lines -file=sents -tr | p -lang=Eng
[23:48:19] I used a GF script with one parse command per line and sentence
[23:50:49] hmm
[23:50:54] how?
[23:51:28] I guess I can also do a diff on the linearized results..
[23:59:23] I started with a file with one sentence per line, and using some perl magic I got every line into the shape 'p -tr "test sentence" | l -tr -treebank'
[23:59:48] and then you can run 'gf < testfile' to test all the sentences in the file
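
A minimal sketch of the script-generation approach described at the end of the log, assuming a plain-text file with one sentence per line; the file names sents.txt and testfile are illustrative, not from the log, and the GF commands are exactly the ones quoted above:

    # Hypothetical "perl magic": wrap each sentence in its own parse-and-linearize
    # command, following the 'p -tr "..." | l -tr -treebank' shape from the log.
    perl -ne 'chomp; print qq{p -tr "$_" | l -tr -treebank\n};' sents.txt > testfile

    # Feed the generated script to the GF shell, as suggested above. Because each
    # sentence gets its own command, a parse failure appears right after that
    # sentence's command instead of being buried in one big batch of output.
    gf < testfile
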