What is the max Avatar Complexity you will Render?

What is the highest avatar complexity (ARC) you will render for a performance?

  • Under 50k

  • 51-75k

  • 76-100k

  • 101-125k

  • 126-150k

  • 151-175k

  • 176-200k

  • 201-250k

  • Unlimited/maxed

  • Other, I'll post in the comments


Results are only viewable after voting.

Mechanical

New member
Joined
Oct 21, 2018
Messages
47
SL Rez
2006
and I truly wish jewellery/accessories did not use so many textures, it’s just not necessary with even intermediate texturing skills.
It's terribly common for jewellery to have 1024x1024 textures, sometimes several of them. Viewers can only zoom in so far with the normal camera controls, and the gems on these pieces of jewellery are still only a few pixels across. To see the items at their apparently intended detail you need to tap Ctrl+0 several times. At that point, what is the onlooker supposed to be looking at? How many people in SL even know that keyboard combo? Excuse me while I kneel down and peer at someone's navel piercing from half an inch away?
 

Myficals

Nein!
Joined
Sep 19, 2018
Messages
504
Location
a sunburnt country
SL Rez
2007
Joined SLU
Feb 2010
SLU Posts
4075
I generally run with a complexity around 120k, with exceptions for friends and other people I deal with on the regular whose complexities are higher for some reason.

If I were to attend a performance of the kind you're describing, I think I'd be inclined to drop my default complexity to minimal, temporarily remove any friends present from my render exceptions, and then make exceptions only for the performers. I guess with something like this I wouldn't really expect the performers to hold back, so I'd adjust my own viewing conditions to compensate.
 

Tirellia

Cold and Wet
Joined
Sep 20, 2018
Messages
388
SL Rez
2007
Joined SLU
2014
SLU Posts
320
At a performance I would want to render the performers fully. If that meant my PC was threatening to melt, I'd derender everyone else.
 

NiranV Dean

Animating Your Life
Joined
Sep 24, 2018
Messages
181
Location
Germany
SL Rez
2007
Joined SLU
Jan 2011
SLU Posts
1616
Penny Patton
The VRAM display is off?

I can tell you what's off with it. It only accounts for what a given avatar is using at the very moment you refresh it. Avatars you don't look at might not load their textures (or might unload them if you look away for too long), or they might downgrade the texture resolution, so when you look at the VRAM usage it counts only what that avatar is using right now, not what its potential maximum would be if all textures were loaded at their full resolution.

This is, however, a limitation the entire ARC system has: it can only calculate what it knows. It doesn't know how much VRAM a texture uses if the biggest available resolution isn't loaded, and it can't tell you how much ARC an object has if that object only has its lowest LOD loaded and the highest LOD is (for some reason) not yet readable. This causes the complexity and VRAM usage values to fluctuate depending on whether you are looking at someone or not, and whether you're doing so from far away or from very close up. I'm already forcing the ARC calculation to use the max LOD triangle count rather than whatever is in use at the time, but that max LOD triangle count again only factors in whatever the max LOD is at that moment; if it is unknown or only halfway built it will show lower numbers until you refresh.

I'm unsure whether I can fix the VRAM thing, as it would require loading and decoding the biggest available size of every texture at least once to determine an accurate memory usage value. I could just play the guessing game and calculate it manually, e.g. 1024x1024 = 4MB, 512x512 = 1MB and so on, but that would be highly hacky and might not account for special cases such as alpha channels or special texture color formats (RGB, RGBA, RGBA8 and so on), all of which can possibly have an impact too.

I've really tried to make the complexity tools as transparent and straightforward as possible and to give as much information as I can, although I might revise them at a later point to include texture count, texture resolutions and some clearer labels, such as "current VRAM usage" rather than "VRAM usage", to make it clear that this is not a final value but an ever-changing one depending on the situation.
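Just to illustrate that "guessing game", here is a minimal sketch of a worst-case estimate, assuming a made-up TextureInfo struct rather than the viewer's actual texture classes:

C++:
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for a texture entry; a real viewer would pull the
// dimensions from its own texture objects instead.
struct TextureInfo
{
    uint32_t full_width;
    uint32_t full_height;
};

// Worst-case VRAM guess for one avatar: every texture decoded at its full
// resolution, 4 bytes per texel (uncompressed RGBA), regardless of what is
// actually resident at this moment.
uint64_t estimateMaxVRAMBytes(const std::vector<TextureInfo>& textures)
{
    uint64_t total = 0;
    for (const TextureInfo& tex : textures)
    {
        total += static_cast<uint64_t>(tex.full_width) * tex.full_height * 4u;
    }
    return total;
}

int main()
{
    // One 1024x1024 and one 512x512 texture -> 4 MiB + 1 MiB = 5,242,880 bytes.
    std::vector<TextureInfo> textures = { { 1024, 1024 }, { 512, 512 } };
    std::printf("worst case: %llu bytes\n",
                static_cast<unsigned long long>(estimateMaxVRAMBytes(textures)));
    return 0;
}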

But thanks for your feedback anyway, as little as it was, it got me thinking again... of another thing to put onto my ever growing To-Do list of doom.
 

Fauve Aeon

🌽🐍Mostly Jellicle...🐍🌽
Joined
Oct 29, 2018
Messages
237
Location
SL, Kitely, IMVU
SL Rez
2008
Joined SLU
Not sure of date/postcount
Texture count in Firestorm is helping me strip some textures off things that don’t need them. I also made a much simpler prim hair to replace a sculpty monstrosity, using one nice texture to good advantage, mirrored and flipped, messing with the repeats. It stripped 20,000 off my ARC, plus it was fun to make. On another project I used SL’s built-in bump map options (wood and weave) on two surfaces in a photo frame. I tinted them and added a blank shine map, adjusting the numbers, using no textures at all. I’m pleased!

My costume hair WIP. Copycat of a very old long-gone Tayū hair.


The hair it will replace.
 
Last edited:

Chalice Yao

The Purple
Joined
Sep 20, 2018
Messages
451
Location
Somewhere Purple, Germany
SL Rez
2007
Joined SLU
Dec 2007
SLU Posts
9108
I could just play the guessing game and calculate it manually, e.g. 1024x1024 = 4MB, 512x512 = 1MB and so on, but that would be highly hacky and might not account for special cases such as alpha channels or special texture color formats (RGB, RGBA, RGBA8 and so on), all of which can possibly have an impact too.
This is actually the safe way to go about it, really.
You will get the max VRAM usage that would essentially hit you if you zoomed close to the avatar.
As for the channels - it's easier than you think, as nowadays pretty much all modern graphics cards internally use RGBA for unpacked textures - and even if a texture has no alpha channel, it will get converted to that format. And all SL textures are 8 bits per channel. So X * Y * 4 is the de-facto correct calculation in the end if you want to get the vidya card RAM usage. (The exception is if you enable compressed textures, but HNNNGH those are horrible in SL)
 

Penny Patton

Graphical Moo
Joined
Sep 30, 2018
Messages
48
NiranV Dean
But thanks for your feedback anyway, as little as it was, it got me thinking again... of another thing to put onto my ever growing To-Do list of doom.
You're welcome! I'd really love to see Black Dragon add the separate VRAM and Triangle jelly doll caps that Firestorm opted not to let their users have access to. It's an amazingly useful feature. Both for performance, and managing one's own VRAM use.
 

NiranV Dean

Animating Your Life
Joined
Sep 24, 2018
Messages
181
Location
Germany
SL Rez
2007
Joined SLU
Jan 2011
SLU Posts
1616
This is actually the safe way to go about it, really.
You will get the max VRAM usage that would essentially hit you if you zoomed close to the avatar.
As for the channels - it's easier than you think, as nowadays pretty much all modern graphics cards internally use RGBA for unpacked textures - and even if a texture has no alpha channel, it will get converted to that format. And all SL textures are 8 bits per channel. So X * Y * 4 is the de-facto correct calculation in the end if you want to get the vidya card RAM usage. (The exception is if you enable compressed textures, but HNNNGH those are horrible in SL)
Even then, however, I still have the issue that the Viewer doesn't know the size of a texture unless it downloads it. So to even know the size I'd have to tell it to download the texture first (even if I were to trash it immediately), and how would the Viewer even keep track of this... this sounds like a monstrously complex and very easy to break tool, one that will probably never be accurate. Unless you know something I don't, which I guess you do.

You're welcome! I'd really love to see Black Dragon add the separate VRAM and Triangle jelly doll caps that Firestorm opted not to let their users have access to. It's an amazingly useful feature. Both for performance, and managing one's own VRAM use.
I'll look into it. ARC already includes VRAM usage (although the usage complexity values might be a bit low) but adding a separate VRAM limit wouldn't hurt i guess but then again i expect it to be buggy and depending on the above.
 

Penny Patton

Graphical Moo
Joined
Sep 30, 2018
Messages
48
I'll look into it. ARC already includes VRAM usage (although the usage complexity values might be a bit low)
That's the problem, ARC severely downplays VRAM use. I have seen avatars with an ARC around 60k, but their VRAM use was pushing 1GB. That's an extreme case, but I regularly see avatars with low ARC scores but crazy high VRAM and/or triangle count. Whenever I've brought down my own avatar's texture use it has had minimal effect on my ARC score.

It's worth noting that when LL introduced jelly dolls, the average ARC seemed to drop over the following years. But VRAM use and triangle counts continue to rise, because content creators aren't being discouraged from pushing content that consumer computers cannot reasonably be expected to handle in an environment like SL, and LL doesn't even give people the tools they need to manage it even if they're aware of the issue.
 

NiranV Dean

Animating Your Life
Joined
Sep 24, 2018
Messages
181
Location
Germany
SL Rez
2007
Joined SLU
Jan 2011
SLU Posts
1616
That's the problem, ARC severely downplays VRAM use. I have seen avatars with an ARC around 60k, but their VRAM use was pushing 1GB. That's an extreme case, but I regularly see avatars with low ARC scores but crazy high VRAM and/or triangle count. Whenever I've brought down my own avatar's texture use it has had minimal effect on my ARC score.

It's worth noting that when LL introduced jelly dolls, the average ARC seemed to drop over the following years. But VRAM use and triangle counts continue to rise, because content creators aren't being discouraged from pushing content that consumer computers cannot reasonably be expected to handle in an environment like SL, and LL doesn't even give people the tools they need to manage it even if they're aware of the issue.
I guess it's time for Niran to strike again, then. Oz will absolutely hate me - that is, if he doesn't already.

Every time I touch something Oz said not to, I imagine Oz's spider-sense tingling; he's like, "Ugh, it's Niran again, he's touching my code again... I specifically told him NOT to touch that. Grrr."

EDIT:
Currently textures are calculated as follows:
S32 texture_cost = 256 + (S32)(ARC_TEXTURE_COST * (img->getFullHeight() / 128.f + img->getFullWidth() / 128.f));

If we plug in the values for a normal 1024x1024 texture, we get:
S32 texture_cost = 256 + (S32)(ARC_TEXTURE_COST * (1024 / 128.f + 1024 / 128.f));
ARC_TEXTURE_COST is a static 16 points:
S32 texture_cost = 256 + (S32)(16 * (1024 / 128.f + 1024 / 128.f));
In the end we get this:
512 = 256 + (16 * (8 + 8))
Someone with 1GB memory usage would be at 1024MB / 5MB (per texture) = 204.8 textures; 204.8 * 512 = 104,858 ARC, rounded up.
I agree that someone who single-handedly takes up the entire allocatable texture memory in non-BD Viewers should have a much higher value than a measly ~105k. I thought about making the usage and ARC the same, e.g. one 1024x1024 = 5MB = 5120 ARC. Sounds good? Someone with 1GB usage would have 1,024,000 ARC baseline.
 
Last edited:

Chalice Yao

The Purple
Joined
Sep 20, 2018
Messages
451
Location
Somewhere Purple, Germany
SL Rez
2007
Joined SLU
Dec 2007
SLU Posts
9108
Someone with 1GB memory usage would be at 1024MB / 5MB (per texture) = 204.8 textures; 204.8 * 512 = 104,858 ARC, rounded up.
I agree that someone who single-handedly takes up the entire allocatable texture memory in non-BD Viewers should have a much higher value than a measly ~105k. I thought about making the usage and ARC the same, e.g. one 1024x1024 = 5MB = 5120 ARC. Sounds good? Someone with 1GB usage would have 1,024,000 ARC baseline.
I agree that the ARC cost of textures should directly scale with the memory usage of textures. The way it's done by LL right now is horrible. Large textures with orders of magnitude more memory use hardly make a difference compared to tiny textures.

So, essentially 4 times the memory usage in a texture should mean 4 times the ARC value for that texture. Just like 4 times the triangles should mean 4 times the ARC for those triangles.
Consider the following tho, in regards to 1 KB equaling 1 ARC:
Let's presume a 600k limit for triangles and a 100 MB limit for textures - by those guidelines that would equal 700k ARC as a limit (ignoring small modifiers). However, it would also mean somebody using 300k triangles could suddenly use 400 MB of textures. An unrealistic trade-off, performance-wise.

So, a suggestion would be: 1M ARC limit (Yes, wait for it). 2 unrigged triangles count as 1 ARC, 1 rigged triangle counts as 2 ARC, 1 KB of textures counts as 4 ARC (a rough sketch of this scoring follows after the examples below). The aforementioned higher ARC for rigged triangles is because the viewer needs to go through them at least twice for the rigging pass - thrice if shadows add another rigging pass on top. A metric ton of rigged triangles is bad for performance.
600k unrigged triangles + 175mb textures = 1M.
200k rigged triangles + 150mb textures = 1M.
300k unrigged triangles + 212mb textures = 1M.

And all things and mixtures in between, again ignoring small modifiers like blended (not masked) alpha textures, which might require a flat additional ARC value independent of size - texture size hardly affects the blended alpha impact, it can be heavy in either case.
Just a suggestion. I'm coming up with this off the top of my head - it would need testing, testing and more testing, and visiting actual SL places to see what the experience would be. Perhaps the limit needs to be 2M ARC, perhaps textures need to count for more, or unrigged triangles need to count for less. Trial and error and performance measurements ahoy. I am also not sure how mesh LODs would get worked in; I've not looked at those calculations.
But a good way to start is to determine the maximum amount of texture memory you want on an avatar, determine the maximum ARC purely based on that alone, and then work in the triangle trade-off. In this case it would be 250 MB of textures without triangle accounting.
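A rough sketch of what that scoring could look like, purely as illustration - the function name is made up and 1 MB is treated as 1000 KB so the round examples above come out exactly:

C++:
#include <cstdint>
#include <cstdio>

// Illustrative weights from the suggestion above: 2 unrigged tris = 1 ARC,
// 1 rigged tri = 2 ARC, 1 KB of textures = 4 ARC, against a 1,000,000 ARC budget.
constexpr uint64_t ARC_BUDGET = 1000000;

uint64_t proposedARC(uint64_t unrigged_tris, uint64_t rigged_tris, uint64_t texture_kb)
{
    return unrigged_tris / 2 + rigged_tris * 2 + texture_kb * 4;
}

int main()
{
    // 600k unrigged tris + 175,000 KB of textures -> 300k + 700k = 1,000,000 (right at budget).
    std::printf("%llu of %llu ARC\n",
                static_cast<unsigned long long>(proposedARC(600000, 0, 175000)),
                static_cast<unsigned long long>(ARC_BUDGET));
    return 0;
}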

Even then, however, I still have the issue that the Viewer doesn't know the size of a texture unless it downloads it. So to even know the size I'd have to tell it to download the texture first (even if I were to trash it immediately), and how would the Viewer even keep track of this... this sounds like a monstrously complex and very easy to break tool, one that will probably never be accurate. Unless you know something I don't, which I guess you do.
Nah, I don't have a Big Secret or anything - the complexity calculations kick in when textures are already being loaded, and I'm not sure where to insert any premature checks.

The way I do it is essentially checking the VRAM usage of loaded textures on every complexity update that is caused by anything texture related (loading attachments, detaching attachments, dirty textures - not for things like LOD updates and the like), and as soon as they cross a threshold jellybeaning kicks in.
While this does *not* prevent other textures of that avatar from still being downloaded *if* they are already in the download pipeline, it does prevent further additional texture requests, which can help a lot - usually it stops a couple of megabytes above the set limit. Especially in scenes with multiple avatars that are above that limit, it can do a lot for performance.
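Not the actual code being described, just a bare-bones sketch of that kind of threshold check, with made-up names for the avatar and texture bookkeeping:

C++:
#include <cstdint>
#include <vector>

// Made-up per-avatar view of the textures that are currently resident; a real
// viewer would walk the avatar's attachments and their texture entries instead.
struct LoadedTexture
{
    uint32_t loaded_width;    // resolution actually loaded right now
    uint32_t loaded_height;
};

struct AvatarTextures
{
    std::vector<LoadedTexture> textures;
    bool jellybeaned = false;
};

// Called on texture-related complexity updates (attach/detach, dirty textures,
// not LOD updates). Sums the VRAM of what is actually loaded and flips the
// avatar to "jellybean" once it crosses the limit; further texture requests for
// that avatar would then be skipped. Textures already in the download pipeline
// may still land, so real usage can overshoot the limit by a few megabytes.
void updateVRAMJellybean(AvatarTextures& av, uint64_t vram_limit_bytes)
{
    uint64_t used = 0;
    for (const LoadedTexture& tex : av.textures)
    {
        used += static_cast<uint64_t>(tex.loaded_width) * tex.loaded_height * 4u; // RGBA
    }
    av.jellybeaned = (used > vram_limit_bytes);
}

int main()
{
    AvatarTextures av;
    av.textures = { { 1024, 1024 }, { 1024, 1024 } };   // ~8 MiB currently loaded
    updateVRAMJellybean(av, 4u * 1024u * 1024u);        // 4 MiB limit
    return av.jellybeaned ? 0 : 1;                      // jellybeaned in this example
}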

EDIT:
That is to say, I'm doing this because I do explicit VRAM jellybeaning. The ARC fix for textures as described above could make this obsolete.
 
Last edited:

NiranV Dean

Animating Your Life
Joined
Sep 24, 2018
Messages
181
Location
Germany
SL Rez
2007
Joined SLU
Jan 2011
SLU Posts
1616
Okay, so this is what it looks like now:
S32 texture_cost = (S32)((ARC_TEXTURE_COST * (texture->getFullHeight() * texture->getFullWidth())) / 1024);

This results in each 1024x1024 texture being 5120 ARC, so essentially one KB is one ARC. It seems to fit pretty well into the rest of my calculation.
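For comparison, plugging a few common sizes into the two formulas quoted in this thread (LL's with ARC_TEXTURE_COST = 16, the new one with ARC_TEXTURE_COST = 5 as in the constants below) - this is just arithmetic on the posted formulas, not viewer output:

C++:
#include <cstdio>

// LL's current per-texture cost, as quoted earlier in the thread (ARC_TEXTURE_COST = 16).
int llTextureCost(int width, int height)
{
    return 256 + static_cast<int>(16 * (height / 128.f + width / 128.f));
}

// The new cost from the line above (ARC_TEXTURE_COST = 5), i.e. roughly
// "one KB of assumed VRAM = one ARC".
int newTextureCost(int width, int height)
{
    return (5 * (height * width)) / 1024;
}

int main()
{
    const int sizes[] = { 256, 512, 1024 };
    for (int s : sizes)
    {
        // 256: 320 vs 320; 512: 384 vs 1280; 1024: 512 vs 5120.
        std::printf("%4dx%-4d  LL: %4d   new: %5d\n", s, s, llTextureCost(s, s), newTextureCost(s, s));
    }
    return 0;
}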

Now this is the rest:

C++:
//BD - Experimental new ARC
// per-prim costs
//BD - Particles need to be punished extremely harshly; they are, of all features, the single biggest
//     performance hog in Second Life. Just having them enabled and a tiny bunch around drops the framerate
//     noticeably.
static const U32 ARC_PARTICLE_COST = 16;
//BD - Textures don't directly influence performance on a large scale, but allocating a lot of textures and
//     filling the Viewer memory as well as texture memory grinds at the Viewer's overall performance, and the
//     lost performance does not fully recover when leaving the area in question. Textures have a lingering
//     performance impact that slowly drives down the Viewer's performance, so we should punish them much harder.
//     Textures are not free after all, and not everyone can have 2+GB of texture memory for SL.
static const U32 ARC_TEXTURE_COST = 5;
//BD - Lights are an itchy thing. They don't have any impact if used carefully. They do however have an
//     increasingly bigger impact above a certain threshold at which they will significantly drop your average
//     FPS. We should punish them slightly but not too hard otherwise Avatars with a few lights get overpunished.
static const U32 ARC_LIGHT_COST = 512;
//BD - Projectors have a huge impact whether they cast a shadow or not; multiple of these will make quick
//     work of any good framerate.
static const U32 ARC_PROJECTOR_COST = 4096;
//BD - Media faces have a huge impact on performance; they should never ever be attached and should be used
//     carefully. Punish them with extreme measures. Besides, by default we can only have 6-8 active at any time,
//     and those alone will significantly draw resources, both RAM and FPS.
static const U32 ARC_MEDIA_FACE_COST = 10000; // static cost per media-enabled face

// per-prim multipliers
//BD - Glow has nearly no impact; the impact is already there due to the omnipresent ambient glow Black Dragon
//     uses. Putting up hundreds of glowing prims does nothing; it's a global post-processing effect.
static const F32 ARC_GLOW_MULT = 1.05f;
//BD - Bump has nearly no impact; its biggest impact is texture memory, which we really shouldn't be including.
static const F32 ARC_BUMP_MULT = 1.05f;
//BD - I'm unsure about flexi: on one side it's very efficient, but if huge amounts of flexi are active at the
//     same time they can quickly become extremely slow, which is hardly ever the case.
static const F32 ARC_FLEXI_MULT = 1.15f;
//BD - Shiny has nearly no impact; it's basically a global post-process effect.
static const F32 ARC_SHINY_MULT = 1.05f;
//BD - Invisible prims are not rendered anymore in Black Dragon.
//static const F32 ARC_INVISI_COST = 2.0f;
//BD - Weighted mesh does have quite some impact and it only gets worse with more triangles to transform.
static const F32 ARC_WEIGHTED_MESH = 2.5f;

//BD - Animated textures hit quite hard, not as hard as quick alpha state changes.
static const F32 ARC_ANIM_TEX_COST = 2.f;
//BD - Alphas are bad.
static const F32 ARC_ALPHA_COST = 2.0f;
//BD - Rigged, worn alphas aren't as bad as normal alphas; static ones are evil.
//     Besides, as long as they are fully invisible Black Dragon won't render them anyway.
static const F32 ARC_RIGGED_ALPHA_COST = 1.25f;
//BD - In theory animated meshes are pretty limited, and rendering-wise they are no different from normal avatars.
//     Thus they should not be weighted differently; however, since they are just basic dummy avatars with no
//     super extensive information, relations, name tag and so on, they deserve a tiny complexity discount.
static const F32 ARC_ANIMATED_MESH_COST = 0.95f;
A few things worth noting: unlike the official viewer's calculation, I no longer employ one-off multipliers that multiply an object's or attachment's ARC by a given factor when a single face or prim in a linkset has a certain feature. Not only is it unrealistic to assume that the entire object is alpha just because a single face is, it also plays into the horribly random and inaccurate ARC values we're seeing. Why get punished for a single alpha face in an entire linkset of 255 prims, each with 8 faces? That's ridiculous, and it pretty much doesn't do anything anyway, given that everything seems to be random and free in LL's calculation (unless it is a light, media, flexi or 100% alpha).

Instead, my ARC counts each object and its faces, determines which features are being used (glow, alpha, media and so on) and which features are used on the prim itself (flexi, rigged, unrigged, mesh), and "adds" rather than multiplies the prim's baseline ARC on top of the already present ARC. So rather than the ARC multipliers accumulating the more there are, they are stacked. Say a prim has alpha and glows: we take the triangle count and calculate the baseline ARC from that, then take that prim's ARC again, multiply it by the ARC multipliers and subtract the baseline ARC (pretty sure we could just use the decimal part as the multiplier... but hey, safe is safe), which leaves only the "extra ARC" from those features. We pile this extra ARC into a separate value until all calculations are through, then add the piled-up extra ARC on top of the base object ARC. What we get are some decently accurate numbers that actually achieve what ARC is supposed to do: reliably get rid of over-demanding avatars. I'm proud to say that wearing Maitreya or Belleza gets you instantly jellydolled unless you're completely naked and not wearing any extra body parts from SLINK or elsewhere... or your ARC limit is just way too forgiving.
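In rough pseudo-C++ terms (made-up structs and names, not the actual Black Dragon source, and simplified to a handful of the multipliers listed above - the rigged-alpha discount and the other features are left out), the "stack the extras instead of multiplying" idea looks something like this:

C++:
#include <cstdint>
#include <vector>

// Made-up per-prim summary; the real viewer derives this from the volume and its faces.
struct PrimInfo
{
    uint64_t base_arc;        // baseline ARC from the prim's triangle count
    bool has_alpha;
    bool has_glow;
    bool is_flexi;
    bool is_rigged_mesh;
};

// Multipliers mirroring the constants above.
constexpr float MULT_ALPHA    = 2.0f;    // ARC_ALPHA_COST
constexpr float MULT_GLOW     = 1.05f;   // ARC_GLOW_MULT
constexpr float MULT_FLEXI    = 1.15f;   // ARC_FLEXI_MULT
constexpr float MULT_WEIGHTED = 2.5f;    // ARC_WEIGHTED_MESH

// Instead of multiplying the whole object's ARC when any face has a feature,
// each feature on a prim contributes only its own "extra" (base * mult - base),
// and the piled-up extras are added on top of the object's base ARC at the end.
uint64_t objectARC(const std::vector<PrimInfo>& prims)
{
    uint64_t base_total  = 0;
    uint64_t extra_total = 0;
    for (const PrimInfo& prim : prims)
    {
        base_total += prim.base_arc;
        auto extra = [&prim](float mult) {
            return static_cast<uint64_t>(prim.base_arc * mult) - prim.base_arc;
        };
        if (prim.has_alpha)      extra_total += extra(MULT_ALPHA);
        if (prim.has_glow)       extra_total += extra(MULT_GLOW);
        if (prim.is_flexi)       extra_total += extra(MULT_FLEXI);
        if (prim.is_rigged_mesh) extra_total += extra(MULT_WEIGHTED);
    }
    return base_total + extra_total;
}

int main()
{
    // One 10,000-ARC rigged prim with alpha: 10,000 base + 15,000 (weighted) + 10,000 (alpha) = 35,000.
    std::vector<PrimInfo> prims = { { 10000, true, false, false, true } };
    return objectARC(prims) == 35000 ? 0 : 1;
}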

So far I've had great results, both for myself and for users, given that I don't immediately tell them how to disable jellydolling when the occasional question comes up about what these colorful sprites are all about. In fact I don't tell people at all how to disable it; they also get a hefty lecture from me on what ARC means and why their avatar is essentially a waste of screen space for both themselves and everyone around them.

The only thing that's stopping me from ruling the world currently is LL's incredibly stupid (but funny) script-ARC system, which reads out the ARC your Viewer reports for yourself and then kicks people off the SIM based on it - even when they're on my Viewer. Hilarious.

PS: I'll be tweaking a few values again; specifically, those 10k for a media face are a thorn in my side.
 
Last edited:

Plurabelle Laszlo

Well-known member
Joined
Sep 20, 2018
Messages
164
SL Rez
2007
Joined SLU
2011
Another question I have is... do you cam in on performers on a stage at all, a little, a lot? I always have, so details like eyelashes are nice, but I’m not sure they are terribly visible, especially with stage makeup... debating, maybe if we all wear the same ones... and I truly wish jewellery/accessories did not use so many textures, it’s just not necessary with even intermediate texturing skills.
I tend to zoom in close, especially on faces, be it performers or people I interact with directly. In my experience, SLers’ zooming habits vary a lot. There are those who find detailed jewelry ridiculous because they rarely ever zoom close enough to see any difference in textures, and others who appreciate those details a lot and keep their cam either very close on their counterpart or (I am guilty of that) vainly zoomed in on themselves all the time. Others use those detailed items only for photography or on empty sims. As a jewelry maker I get a lot of positive feedback for stuff that is very detailed; it’s a constant struggle to make content that sells and is not ridiculously texture- or polycount-heavy. That being said, of course jewelry never needs multiple 1024x1024 textures.
 

Chalice Yao

The Purple
Joined
Sep 20, 2018
Messages
451
Location
Somewhere Purple, Germany
SL Rez
2007
Joined SLU
Dec 2007
SLU Posts
9108
A few things worth noting: unlike the official viewer's calculation, I no longer employ one-off multipliers that multiply an object's or attachment's ARC by a given factor when a single face or prim in a linkset has a certain feature. Not only is it unrealistic to assume that the entire object is alpha just because a single face is, it also plays into the horribly random and inaccurate ARC values we're seeing. Why get punished for a single alpha face in an entire linkset of 255 prims, each with 8 faces? That's ridiculous, and it pretty much doesn't do anything anyway, given that everything seems to be random and free in LL's calculation.
Yeah, I consider multiplying the whole object's ARC a horrible idea. It's why I suggested, above, a flat additional X ARC per alpha texture per object. Multiplying the entire overall ARC because of a small alpha face (or a small flexi piece, etc.) just does not make sense.

Instead, my ARC counts each object and its faces, determines which features are being used (glow, alpha, media and so on) and which features are used on the prim itself (flexi, rigged, unrigged, mesh), and "adds" rather than multiplies the prim's baseline ARC on top of the already present ARC.
Yas, kind of like that. Though I would just add a flat ARC amount per alpha face (or texture), etc, instead of just adding once. It seems the best middle ground.

The only thing that's stopping me from ruling the world currently is LL's incredibly stupid (but funny) script-ARC system, which reads out the ARC your Viewer reports for yourself and then kicks people off the SIM based on it - even when they're on my Viewer. Hilarious.
That is...amazingly bad design, and probably the reason why they say not to change the ARC calculation - the sim goes by what the viewer reports for the LSL. Hah.
 

NiranV Dean

Animating Your Life
Joined
Sep 24, 2018
Messages
181
Location
Germany
SL Rez
2007
Joined SLU
Jan 2011
SLU Posts
1616
Yas, kind of like that. Though I would just add a flat ARC amount per alpha face (or texture), etc, instead of just adding once. It seems the best middle ground.
Yea, I was pretty sure I already had it like that... I did it for the complexity floater... not sure why I didn't do it there.

I'd assume the problem is as follows: you can get the triangle count of an entire object, but not of just one face (at least not that I know of), and if I were to do it like this I'd want maximum accuracy - I'd only be happy if the feature multipliers were applied to exactly those triangles that are actually part of the face using said feature. Since this is currently not possible, I'll have to make do with a global per-prim (not per-object) ARC addition. The other option, as you already implied, would be a fixed ARC cost for each feature, but determining completely static ARC values would just end up closer to LL's shit again than to something that works in a useful way for the user. So I'd guess we already get the best middle ground by taking the smallest possible unit (a prim) and applying the addition there. Besides, we already count alpha into the textures too; even when the texture does not contain alpha we still penalize the texture for it.

That is...amazingly bad design, and probably the reason why they say not to change the ARC calculation - the sim goes by what the viewer reports for the LSL. Hah.
Yea, it's amazingly stupid indeed. I had several reports of my users getting kicked from their usual visiting places because those places employ these ARC-kickers. Hella funny when I consider I could walk in there just fine.
 

Vaelissa Cortes

New member
Joined
Sep 20, 2018
Messages
55
SL Rez
2007
I'm proud to say that wearing Maitreya or Belleza gets you instantly jellydolled unless you're completely naked and not wearing any extra body parts from SLINK or elsewhere... or your ARC limit is just way too forgiving.
The problem I see with this, and it's a big one, is that once popular mesh bodies/heads/etc. are downloaded, their actual FPS impact often isn't anywhere near as bad as one might assume just from the total triangle count of the linkset. Most of those triangles are from the often hidden applier layers, and as far as I can tell, 100% transparent rigged meshes get culled from the renderer.

This typically adds up to hundreds of thousands of triangles that aren't being rendered at all, but still get penalized. Even doing just a quick test, looking at rendered tris and measuring FPS, there's a clear, very noticeable difference in performance with those layers disabled. Don't get me wrong, the whole so-called onion-layered construction of these typically no-mod meshes is stupid and irritates me to no end - they are the definition of excessive (especially Belleza) - but penalizing culled faces doesn't seem ideal.
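If the ARC calculation were to follow what actually gets drawn, a sketch of "skip fully transparent rigged faces" could be as simple as this (generic types, not the viewer's real face API):

C++:
#include <cstdint>
#include <vector>

// Generic stand-in for a face on a rigged attachment.
struct FaceInfo
{
    uint32_t triangle_count;
    float    diffuse_alpha;   // 0.0f == fully transparent
    bool     is_rigged;
};

// Count only triangles that would actually be drawn: per the observation above,
// fully transparent rigged faces (hidden onion layers) get culled by the
// renderer, so they are skipped here as well.
uint64_t renderedTriangles(const std::vector<FaceInfo>& faces)
{
    uint64_t tris = 0;
    for (const FaceInfo& face : faces)
    {
        if (face.is_rigged && face.diffuse_alpha <= 0.0f)
        {
            continue;   // not drawn, so no per-frame render cost to charge for
        }
        tris += face.triangle_count;
    }
    return tris;
}

int main()
{
    // A visible body layer plus a fully transparent applier layer: only the first counts.
    std::vector<FaceInfo> faces = { { 80000, 1.0f, true }, { 80000, 0.0f, true } };
    return renderedTriangles(faces) == 80000 ? 0 : 1;
}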
 
Last edited:

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
5,383
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
I agree that the ARC cost of textures should directly scale with the memory usage of textures. The way it's done by LL right now is horrible. Large textures with orders of magnitude more memory use hardly make a difference compared to tiny textures.

So, essentially 4 times the memory usage in a texture should mean 4 times the ARC value for that texture.
That is how it currently works according to Niran:

S32 texture_cost = 256 + (S32)(ARC_TEXTURE_COST * (img->getFullHeight() / 128.f + img->getFullWidth() / 128.f));
That's exactly proportional to area, which means it's exactly proportional to memory use. The question is how much the texture costs should be scaled. Or am I missing something?
 

Chalice Yao

The Purple
Joined
Sep 20, 2018
Messages
451
Location
Somewhere Purple, Germany
SL Rez
2007
Joined SLU
Dec 2007
SLU Posts
9108
That is how it currently works according to Niran:
That's exactly proportional to area, which means it's exactly proportional to memory use.
You would think so at very first glance, but let me toss the actual results of said formula for some texture sizes:

32x32: 264
64x64: 272
128x128: 288
256x256: 320
512x512: 384
1024x1024: 512

ARC_TEXTURE_COST in the formula that Niran posted has a value of...16.
A 1024x1024 texture literally gets treated as roughly twice as bad as a 32x32.
Here are the raw memory usages as a reminder:

32x32: 4 KB
64x64: 16 KB
128x128: 64 KB
256x256: 256 KB
512x512: 1 MB
1024x1024: 4 MB

The issue is that the calculation essentially goes "256 + (16 * (a fraction of the texture's width + a fraction of the texture's height))". The entire thing makes the fraction and multiplication so small that the added 256 by far outweighs it. I have no idea what they were thinking.

But the source code says that this was "performance tested". Probably by comparing the load speed of a single texture in a synthetic benchmark and empty sim, while ignoring 'real-life' SL situations and scenes, internet download speeds and GPU VRAM limits. I dunno.
 
Last edited:

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
5,383
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
Oh, sorry, I misread that as "256 *" not "256 +".

Thinking about why that is so...

256 might be too big, but there probably does need to be some per-texture overhead so that lots of small textures cost more than one larger combined texture. Switching textures, for example, has more overhead than moving the visible portion of a larger texture. This encourages combined textures, and that's a good thing.
 

Chalice Yao

The Purple
Joined
Sep 20, 2018
Messages
451
Location
Somewhere Purple, Germany
SL Rez
2007
Joined SLU
Dec 2007
SLU Posts
9108
Oh, quite. An overhead makes sense, and I think 256 is a good value for it. It's just what it's being added to that's...just wrong :|
Something along the lines of 256 + (height * width * 4) would make some sense IMO.