And yet we have more surprises!
I ran my previous test with the VM argument -verbose:gc.
Result with object pooling:
[GC 49216K->10758K(188352K), 0.0076902 secs]
[GC 59974K->12750K(237568K), 0.0070728 secs]
[GC 111182K->14302K(237568K), 0.0070655 secs]
[GC 112734K->15478K(336000K), 0.0067169 secs]
[GC 212342K->16430K(336000K), 0.0067652 secs]
[GC 213294K->17246K(535872K), 0.0063414 secs]
[GC 410974K->17214K(535872K), 0.0072485 secs]
[GC 410942K->17246K(929728K), 0.0069668 secs]
Time elapsed in nano : 918742224
Time elapsed in milli : 918
Result without object pooling:
[GC 49216K->256K(188352K), 0.0007276 secs]
[GC 49472K->192K(188352K), 0.0005208 secs]
[GC 49408K->256K(188352K), 0.0004196 secs]
[GC 49472K->160K(237568K), 0.0003678 secs]
[GC 98592K->208K(237568K), 0.0004676 secs]
[GC 98640K->224K(328192K), 0.0005211 secs]
[GC 197088K->176K(328192K), 0.0008081 secs]
[GC 197040K->176K(320064K), 0.0004403 secs]
[GC 189168K->176K(312896K), 0.0002397 secs]
[GC 181680K->176K(305472K), 0.0003220 secs]
[GC 174576K->176K(299008K), 0.0002982 secs]
Time elapsed in nano : 654859105
Time elapsed in milli : 654
Conclusion: the pauses with object pooling are usually 10 to 20 times longer, and the throughput is a lot worse! This makes sense for a generational, copying collector: pooled objects stay reachable, so every young collection has to trace and copy them, whereas short-lived garbage dies young and is reclaimed at almost no cost.
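The original benchmark code isn't shown in the post, so here is a minimal sketch of what such a test might look like. The `Buffer`, `BufferPool`, and `PoolingBenchmark` names are my own, and the pool here is deliberately naive; the point is only that pooled objects stay reachable across collections. Running it with `java -verbose:gc` should produce logs of the same shape as those above.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical reconstruction of the benchmark; names and sizes are assumptions.
public class PoolingBenchmark {

    // A payload object just big enough that pooling looks tempting.
    static final class Buffer {
        final byte[] data = new byte[1024];
    }

    // A naive object pool backed by a deque.
    static final class BufferPool {
        private final Deque<Buffer> free = new ArrayDeque<>();

        Buffer acquire() {
            Buffer b = free.poll();
            return (b != null) ? b : new Buffer();
        }

        void release(Buffer b) {
            // Released buffers stay strongly reachable, so every young
            // collection must trace (and possibly copy) them.
            free.push(b);
        }
    }

    // Runs the workload and returns the elapsed time in nanoseconds.
    // A realistic workload would keep many buffers checked out at once;
    // that live set is what inflates the GC pauses in the pooled run.
    static long run(boolean usePool, int iterations) {
        BufferPool pool = new BufferPool();
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Buffer b = usePool ? pool.acquire() : new Buffer();
            b.data[0] = (byte) i; // touch the object so the work is not dead code
            if (usePool) {
                pool.release(b);
            }
            // Without pooling, b becomes garbage immediately and dies young.
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long pooled = run(true, 1_000_000);
        long plain = run(false, 1_000_000);
        System.out.println("Time elapsed in nano (pooling)    : " + pooled);
        System.out.println("Time elapsed in nano (no pooling) : " + plain);
    }
}
```

Note the asymmetry: in the no-pooling run, almost nothing is live at collection time, which is exactly why those young-generation pauses in the log stay under a millisecond.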