Comments on Stuff Gil Says: What sort of allocation rates can servers handle?

Anonymous (2015-09-02):

The approach of slicing larger processes into smaller ones in order to reduce the max pause time seen in each is certainly common. It's what I call the "640KB" design style (which is more like 640MB these days, but it's the same concept). The approach works at some levels, with enough "duct tape" to keep things running. It's a good example of wasted engineering effort and inefficient design driven by the need to work around a single problem: pause times that are too large to be acceptable.

As to the scale at which the approach remains efficient, that varies dramatically. E.g. in the second tier of distributed cache systems, you can probably see N go to the low tens (within a physical system). But in most actual applications (you mention Tomcat as a container), the approach usually caps out with N in the single digits (on a single system) before running into problems. Hosting tens of JVMs that are mostly idle, or tens of JVMs that all carry nearly identical work patterns, seems to work on a single system. But going to tens of JVMs that are all active and all have disparate timing and working sets tends to lead to thrashing. A single active JVM is a hungry thing (especially when it is busy doing a multi-second GC), and tens of these hungry processes don't tend to make good neighbors.

When any form of caching is involved, the inefficiency of splitting and replicating the cached data comes into play pretty early, too. You often end up either with a smaller cache per instance (most common), which results in higher miss rates in the instance-local cache and more actual work that needs to be done as a result, or with the cache replicated in each process (whether it's kept coherent or not), which dramatically increases GC work in the system as a whole while still keeping GC pauses in each instance high.

On the question of mainstream JVMs that are capable of releasing unused heap back to the OS: there is one that does this very well: Zing. It is completely elastic, with all pages (above a dedicated level) released back to the OS immediately as they are collected. Other HotSpot variants will also dynamically adjust their heap (down to -Xms) if there is no pressure on it, but that adjustment is much slower and delayed: it only occurs when the load or working set on the individual JVM drops dramatically for a long period of time (multiple oldgen cycles). Slowly releasing memory when idle doesn't do much for you when all N JVMs are actually active at the same time, which is inherent if they are just split-up portions of what would otherwise be a single active instance.
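For anyone who wants to observe that slower HotSpot behavior directly, here is a minimal probe sketch. The -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio flags are real HotSpot options, but the values below are illustrative assumptions, the class name is hypothetical, and actual shrink behavior varies by collector and version (CMS in particular was known for not giving committed heap back):

    // Run with flags that permit heap shrinking, e.g. (HotSpot):
    //   java -Xms256m -Xmx4g -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 HeapShrinkProbe
    public class HeapShrinkProbe {
        public static void main(String[] args) throws InterruptedException {
            System.out.printf("committed heap at start: %d MB%n",
                    Runtime.getRuntime().totalMemory() >> 20);

            // Create temporary pressure: a burst of allocations (~512 MB).
            byte[][] burst = new byte[512][];
            for (int i = 0; i < burst.length; i++) {
                burst[i] = new byte[1 << 20]; // 1 MB each
            }
            System.out.printf("committed heap under load: %d MB%n",
                    Runtime.getRuntime().totalMemory() >> 20);

            burst = null;  // drop the working set
            System.gc();   // request a collection (a hint, not a guarantee)
            Thread.sleep(5_000);

            // Whether this drops back toward -Xms depends on the collector
            // and the free-ratio flags above; it will not drop immediately.
            System.out.printf("committed heap after idle: %d MB%n",
                    Runtime.getRuntime().totalMemory() >> 20);
        }
    }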
As to the notion that it will take a long time to get Zing into your production datacenter: I'm often surprised at the sort of re-engineering people will aim at their production datacenter deployments (like splitting their processes up into lots of small pieces, with all the disruptive changes that entails) when the alternative is much simpler and easier to get through even the most rigorous testing and re-qualification processes. Lots of people use Zing in production datacenters, either in place of HotSpot or side by side (some apps use Zing, others use HotSpot). Shifting to Zing is invariably easier to get your datacenter folks to agree to than a redesign and re-architecture of the deployment of your nicely working (except for pauses) application that would increase process counts by 10x for the same workload, along with the rigorous testing under varying load conditions that would be needed to map the edges of the new load-driven failure modes and load-bearing behaviors that this sort of fine-grained splitting creates.

Anonymous (2015-06-02):

Hi, I just got here after listening to your brilliant "Understanding Java GC" presentation from 2012 (thank you a TON for compiling and sharing it!).

As much as I'd like us to try Zing out, it would take a long time before it reached our production datacenter, and in the short term I don't think the miracle of pluggable GCs exists in JVMs (to use C4 instead of CMS).

Since our throughput has grown, we're facing >100 sec GC pauses, even though RAM isn't over-utilized at all (live data is less than 5 GB). To mitigate the "length of GC pause exceeds human/software timeouts" problem, I'm planning to try a somewhat counter-intuitive approach:
- REDUCE the heap_size 10x
- deploy 10x Tomcat instances
- reduce live_set_size "almost 10x" (the balancer routes calls per user-hash, so we can shrink the user-object caches 10x, though the static/config caches will remain the same)

If I understood your math correctly, this will give each instance:
- almost the same frequency of full GCs (allocation_rate/10, heap_size/10, live_set_size/almost_10)
- 10x shorter duration of each full GC (heap_size/10)
And overall CPU use should remain the same (the same load spread into smaller buckets).
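For concreteness, a back-of-the-envelope sketch of that arithmetic (all constants are illustrative assumptions, not measurements; the model takes full-GC frequency as allocation_rate / (heap_size - live_set_size) and treats pause time as proportional to heap size, per the reasoning above):

    // Toy model of the 10x-split plan. Numbers are illustrative assumptions.
    public class SplitMath {
        public static void main(String[] args) {
            double allocRateGBps = 0.25;  // assumed total allocation rate, GB/s
            double heapGB        = 50.0;  // assumed heap of the single big JVM
            double liveSetGB     = 5.0;   // "live data is less than 5 GB"
            double pauseSecPerGB = 2.0;   // assumed full-GC cost per GB of heap

            report("1 big JVM    ", allocRateGBps, heapGB, liveSetGB, pauseSecPerGB);

            // Split 10 ways: 1/10 of the allocation rate and heap per instance;
            // the live set shrinks "almost 10x" because the static/config
            // caches are replicated in every instance rather than split.
            int n = 10;
            double staticGB = 1.0; // assumed replicated static/config portion
            double perInstanceLive = staticGB + (liveSetGB - staticGB) / n;
            report(n + " small JVMs", allocRateGBps / n, heapGB / n,
                   perInstanceLive, pauseSecPerGB);
        }

        static void report(String label, double alloc, double heap,
                           double live, double pausePerGB) {
            double secsBetweenFullGCs = (heap - live) / alloc; // time to refill free heap
            double pauseSecs = heap * pausePerGB;              // pause ~ heap size
            System.out.printf("%s: full GC every ~%.0f s, pause ~%.0f s%n",
                    label, secsBetweenFullGCs, pauseSecs);
        }
    }

With these numbers the single JVM takes a ~100 s full GC every ~180 s, while each of the ten small JVMs takes a ~10 s full GC every ~144 s: almost the same frequency and roughly 10x shorter pauses, as claimed.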
I understand the approach is not as efficient as C4, but it should still be effective up to some NNN-times scale (100x?), with the following limitations:
1) at some scale, the need to keep NNN copies of the static/config caches (live set) will skew the "almost-NNNx" math into impracticality (see the toy model below)
2) the non-uniformity of user data (large users) will start to suffer from the walls of too-small buckets

Just curious: is any mainstream JVM capable of releasing unused heap (above the -Xms level) back to the OS? (This would have addressed #2 above.)
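A toy model of limitation #1 (illustrative numbers only): as N grows, the replicated static/config caches come to dominate each instance's live set, so the per-instance live set stops shrinking while the fleet-wide live set, and hence total GC work, grows roughly linearly with N:

    // Toy model: user caches split N ways, static/config caches replicated.
    // The cache sizes are assumptions for illustration.
    public class StaticCacheSkew {
        public static void main(String[] args) {
            double userCacheGB   = 4.0; // splittable per-user live data
            double staticCacheGB = 1.0; // replicated static/config data

            for (int n : new int[] {1, 10, 100}) {
                double perInstance = staticCacheGB + userCacheGB / n;
                double total = n * perInstance;
                System.out.printf(
                        "N=%3d: live set %.2f GB per instance, %.0f GB across the fleet%n",
                        n, perInstance, total);
            }
        }
    }

At N=100 each instance still carries ~1 GB of replicated static data, so the fleet's total live set is ~104 GB versus the original 5 GB, which is exactly the kind of skew described in the first limitation.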