Hi
I’m having a look at SGS to check whether it can easily handle large amounts of server-side data. Related to this, I’m wondering about a number of things. If someone can help me out with them … ;D
- The server tutorial mentions that data that wasn’t removed from the object store will not be collected. (Page 3, last paragraph.) Does this mean the total amount of data in the object store is limited to the JVM’s heap size? Or can it handle anything that fits on the local file system, with caching/pooling? If not, would something like JPA (or even Java EE) be a solution?
- I would like to implement a kind of Observer pattern where clients subscribe to changes in the server-side model. To do this it would be very handy to be able to catch changes to managed objects, in order to notify the clients that a particular object has been modified. Is there a usual design for this in SGS? Presumably these notifications would have to use the server/client messaging mechanism. Correct?
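To make the idea concrete, here is roughly what I have in mind, in plain Java. All the names here are mine, not SGS API; in the real thing the notification step would presumably go through the server/client messaging mechanism instead of an in-memory outbox:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the observer idea: a model object keeps a list of subscribed
// client ids, and every mutation notifies them. Hypothetical names only.
class ObservedModel {
    private final List<String> subscribers = new ArrayList<>();
    // Stand-in for server-to-client messages; in SGS this would be
    // whatever the messaging mechanism provides.
    private final List<String> outbox = new ArrayList<>();
    private int value;

    void subscribe(String clientId) {
        subscribers.add(clientId);
    }

    void setValue(int newValue) {
        value = newValue;
        // Notify every subscribed client that this object changed.
        for (String client : subscribers) {
            outbox.add(client + ": model changed to " + newValue);
        }
    }

    int value() {
        return value;
    }

    List<String> outbox() {
        return outbox;
    }
}
```

The question is essentially whether SGS offers a hook to trigger the `setValue`-style notification automatically when a managed object is modified, or whether every mutating method has to do it by hand.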
- I read in this forum that the max message size is 64K for messages sent to the clients. Is this also the case for messages sent to the server?
- Looking for answers, I saw a topic about an initial model upload to the clients. As my model is far too big to send to the clients in full on a regular basis, I also thought about an initial load, making things slow only at login time. But I would really like to avoid the additional server-side setup, configuration, and code needed to support HTTP as well. Breaking the data up into 64K pieces is not difficult; alas, the sheer number of network requests that result will hurt performance. Any suggestions?
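Just to show that the chunking itself is the easy part (the round trips are my worry, not the code), here is a minimal sketch. The class and method names are mine:

```java
import java.util.ArrayList;
import java.util.List;

// Split a large payload into chunks that fit under the 64K message limit.
// Hypothetical helper, not part of any SGS API.
class Chunker {
    static final int MAX_CHUNK = 64 * 1024; // 65536 bytes

    static List<byte[]> split(byte[] data) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += MAX_CHUNK) {
            int len = Math.min(MAX_CHUNK, data.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(data, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }
}
```

A 10 MB model would already mean some 160 messages per client at login, which is the overhead I would like to avoid.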
Many thanks for helping me!
Jan