Initial support for higher precision property texture types #13135
mzschwartz5 wants to merge 14 commits into main from
Conversation
Thank you for the pull request, @mzschwartz5! ✅ We can confirm we have a CLA on file for you.
Python script I used to generate the test data:

import base64
import numpy as np
from PIL import Image

def float_to_rgba_bytes_array(f_arr):
    # f_arr: float32 numpy array of shape (H, W)
    f32 = f_arr.astype(np.float32)
    f32 = f32.astype('<f4')  # explicit little-endian float32
    bytes_view = f32.view(np.uint8).reshape(f32.shape + (4,))  # shape (H, W, 4)
    # bytes_view[..., 0] is the first byte in memory; little-endian means least significant byte first.
    # PNG expects channel order R, G, B, A, so map directly:
    rgba = bytes_view.copy()
    return rgba  # dtype uint8

def create_gradient_png(path, width, height):
    # Horizontal and vertical gradients from 0.0 .. 1.0, combined into one float
    xs = np.linspace(0.0, 1.0, width, dtype=np.float32)
    ys = np.linspace(0.0, 1.0, height, dtype=np.float32)
    row = xs[np.newaxis, :]  # shape (1, W)
    col = ys[:, np.newaxis]  # shape (H, 1)
    grid = (row + col) / 2.0  # shape (H, W), values 0..1 blending horizontal + vertical
    rgba = float_to_rgba_bytes_array(grid)
    # Ensure alpha is not premultiplied/modified; the bytes already contain the float bits
    img = Image.fromarray(rgba, mode='RGBA')
    img.save(path, format='PNG', optimize=False)
    print(f"Wrote {path} ({width}x{height})")
    with open(path, 'rb') as f:
        b64 = base64.b64encode(f.read()).decode('ascii')
    data_uri = 'data:image/png;base64,' + b64
    print(data_uri)
    return data_uri

if __name__ == '__main__':
    create_gradient_png('float_rgba_gradient.png', width=256, height=256)
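As a quick cross-check of the byte layout, here is an illustrative JavaScript sketch (not from the PR) of the same reassembly the shader-side unpacking performs, assuming the bytes land in R, G, B, A in little-endian order:

// Illustrative only: reassemble a float32 from four texel bytes (R, G, B, A),
// assuming the little-endian packing produced by the script above.
function bytesToFloat32(r, g, b, a) {
  const buffer = new ArrayBuffer(4);
  const bytes = new Uint8Array(buffer);
  bytes[0] = r; // least significant byte
  bytes[1] = g;
  bytes[2] = b;
  bytes[3] = a; // most significant byte
  return new DataView(buffer).getFloat32(0, true); // true = little-endian
}

// 0.5 encodes as 0x3F000000, i.e. bytes [0x00, 0x00, 0x00, 0x3F]:
console.log(bytesToFloat32(0x00, 0x00, 0x00, 0x3f)); // 0.5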
* A 64-bit signed integer. This type requires BigInt support.
*
* @see FeatureDetection.supportsBigInt
* A 64-bit signed integer.
Removed this (and some other related stuff) as bigint is widely supported now.
export const ScalarCategories = {
  INTEGER: "int",
  UNSIGNED_INTEGER: "uint",
  FLOAT: "float",
};

MetadataComponentType.typeInfo = {
Rather than using lots of switch statements whenever we want to know if a component type is an integer, vector compatible, etc., I refactored this class to use object maps.
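For context, a minimal sketch of what such an object-map lookup can look like (the field names here are hypothetical, not necessarily the ones used in this PR):

// Illustrative sketch of a typeInfo map keyed by component type.
// Field names are hypothetical; the actual map in the PR may differ.
const typeInfo = {
  UINT8: { isInteger: true, isUnsigned: true, sizeInBytes: 1, minimumValue: 0, maximumValue: 255 },
  INT16: { isInteger: true, isUnsigned: false, sizeInBytes: 2, minimumValue: -32768, maximumValue: 32767 },
  FLOAT32: { isInteger: false, isUnsigned: false, sizeInBytes: 4 },
};

// Queries become property lookups instead of switch statements:
function isIntegerType(componentType) {
  return typeInfo[componentType]?.isInteger === true;
}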
];

// Map from scalar component type to the GLSL function used to reinterpret from uint bits to the scalar type
const uintBitsToScalarType = {
Better name suggestion welcome...
  return integerTypesByComponentCount[componentCount];
};

PropertyTextureProperty.prototype.unpackInShader = function (
This is the meat of this PR. It's admittedly quite complex, and I'm open to suggestions on how to simplify and improve this bit (or maybe that's just how dynamically generated code is...)
The goal of this function is to write code to sample and unpack a property value from a texture. Such code can take various, slightly different forms, depending on the data types involved, texture channels used, etc.
The end result in the shader is meant to look something like this (pseudo-code from the structural metadata extension spec):
vec4<uint8> rgba = texture(sampler, coordinates);
uint8 byte0 = rgba[channels[0]];
uint8 byte1 = rgba[channels[1]];
uint8 byte2 = rgba[channels[2]];
uint8 byte3 = rgba[channels[3]];
uint32 rawBits = byte0 | (byte1 << 8) | (byte2 << 16) | (byte3 << 24);
float32 value = uintBitsToFloat(rawBits);
or this (in the case of a VEC2 property):
vec4<uint8> rgba = texture(sampler, coordinates);
uint8 byte0 = rgba[channels[0]];
uint8 byte1 = rgba[channels[1]];
uint8 byte2 = rgba[channels[2]];
uint8 byte3 = rgba[channels[3]];
value[0] = byte0 | (byte1 << 8);
value[1] = byte2 | (byte3 << 8);
With some extra stuff for declaring variables, casting to the right type, normalizing if necessary... etc
Also, to complicate things a bit further, a property can be an array, a vector-type, or an array-of-vector-type!
In the shader, all of these simply end up as a vecN type (up to a vec4 - anything bigger will fail validation)
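For illustration, a simplified, hypothetical sketch of the kind of string-building such a generator does for a single FLOAT32 scalar property (variable names are placeholders; the actual unpackInShader handles vectors, arrays, normalization, and more):

// Hypothetical, simplified sketch: emit GLSL ES 3.00 (WebGL 2) that reassembles
// a float from four 8-bit texture channels.
function generateFloat32Unpack(samplerName, channels) {
  return [
    `vec4 texel = texture(${samplerName}, texCoord);`,
    // Convert each normalized channel back to its raw byte value (0..255).
    `uint byte0 = uint(texel[${channels[0]}] * 255.0 + 0.5);`,
    `uint byte1 = uint(texel[${channels[1]}] * 255.0 + 0.5);`,
    `uint byte2 = uint(texel[${channels[2]}] * 255.0 + 0.5);`,
    `uint byte3 = uint(texel[${channels[3]}] * 255.0 + 0.5);`,
    `uint rawBits = byte0 | (byte1 << 8u) | (byte2 << 16u) | (byte3 << 24u);`,
    `float value = uintBitsToFloat(rawBits);`,
  ].join("\n");
}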
> This is the meat of this PR. It's admittedly quite complex, and I'm open to suggestions on how to simplify and improve this bit (or maybe that's just how dynamically generated code is...)
I found it pretty clear, don't really see any obvious ways to make it simpler.
@@ -0,0 +1,35 @@
/**
We have some builtins like this already: unpackFloat, unpackUint. They seem to be targeted at WebGL 1, whereas this builtin targets WebGL 2 and is type-agnostic (it just unpacks to uint bits).
I'm open to another name besides unpackTexture, since this is technically usable for other purposes.
Maybe these should be called czm_unpackUint and the old ones should be called czm_unpackUintWebgl1?
Hmm... the old functions are named with the target type. So unpackFloat unpacks 4 bytes "packed as a little-endian unsigned normalized vec4" to a float.
The new functions unpack to a uint, true, but only as an intermediate - for the bits to then be reinterpreted as some other type. So the name unpackUint still doesn't seem right to me.
True...
I was also considering the word reinterpret, like reinterpretTextureSampleAsUint, but that doesn't quite fit either.
The current wording is probably good enough.
if (!context.webgl2) {
  return;
}
Guarding these behind a WebGL 2 check, as the CI WebGL stub seems to run a WebGL 1 context. (Obviously not ideal...)
javagl left a comment
Some inlined comments.
Beyond that, I created a basic sandcastle for some local/manual tests. This sandcastle contains some of the functions that are taken from
to create different "flavors" of property textures. For now, the following is just a basic test, posted in a form that is currently tweaked to show a VEC2/INT16 (normalized) form of the texture, and allows one to quickly check that 1. it basically works, and 2. it takes the offset/scale into account properly.
import * as Cesium from "cesium";
const viewer = new Cesium.Viewer("cesiumContainer", {
globe: false
});
/**
* Creates an embedded glTF asset with a property texture.
*
* This creates an asset that represents a unit square and uses
* the `EXT_structural_metadata` extension to assign a single
* property texture to this square.
*
* @param {object} schema The metadata schema
* @param {object} propertyTextureProperties The property texture properties
* @returns The gltf
*/
function createEmbeddedGltfWithPropertyTexture(
schema,
propertyTextureProperties,
) {
const result = {
extensions: {
EXT_structural_metadata: {
schema: schema,
propertyTextures: [
{
class: "exampleClass",
properties: propertyTextureProperties,
},
],
},
},
extensionsUsed: ["EXT_structural_metadata"],
accessors: [
{
bufferView: 0,
byteOffset: 0,
componentType: 5123,
count: 6,
type: "SCALAR",
max: [3],
min: [0],
},
{
bufferView: 1,
byteOffset: 0,
componentType: 5126,
count: 4,
type: "VEC3",
max: [1.0, 1.0, 0.0],
min: [0.0, 0.0, 0.0],
},
{
bufferView: 1,
byteOffset: 48,
componentType: 5126,
count: 4,
type: "VEC3",
max: [0.0, 0.0, 1.0],
min: [0.0, 0.0, 1.0],
},
{
bufferView: 1,
byteOffset: 96,
componentType: 5126,
count: 4,
type: "VEC2",
max: [1.0, 1.0],
min: [0.0, 0.0],
},
],
asset: {
generator: "JglTF from https://github.com/javagl/JglTF",
version: "2.0",
},
buffers: [
{
uri: "data:application/gltf-buffer;base64,AAABAAIAAQADAAIAAAAAAAAAAAAAAAAAAACAPwAAAAAAAAAAAAAAAAAAgD8AAAAAAACAPwAAgD8AAAAAAAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAAAAgD8AAAAAAACAPwAAgD8AAAAAAAAAAAAAAAAAAAAAAACAPwAAAAAAAAAA",
byteLength: 156,
},
],
bufferViews: [
{
buffer: 0,
byteOffset: 0,
byteLength: 12,
target: 34963,
},
{
buffer: 0,
byteOffset: 12,
byteLength: 144,
byteStride: 12,
target: 34962,
},
],
images: [
{
// A 16x16 pixels image that contains all combinations of
// (0, 127, 255) in its upper-left 9x9 pixels
uri: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAi0lEQVR42u2RUQ6AMAhDd3OO/qQt8VP8NRHjNpf0leI5ruqXbNVL4c9Dn+E8ljV+iLaXaoAY1YDaADaynBg2gFZLR1+wAdJEWZpW1AIVqmjCruqybw4qnEJbbQBHdWoS2XIUXdp+F8DNUOpM0tIZCusQJrzHNTnsOy2pFTZ7xpKhYFUu4M1v+OvrdQGABqEpS2kSLgAAAABJRU5ErkJggg==",
mimeType: "image/png",
},
],
materials: [
{
pbrMetallicRoughness: {
baseColorFactor: [1.0, 1.0, 1.0, 1.0],
metallicFactor: 0.0,
roughnessFactor: 1.0,
},
alphaMode: "OPAQUE",
doubleSided: true,
},
],
meshes: [
{
primitives: [
{
extensions: {
EXT_structural_metadata: {
propertyTextures: [0],
},
},
attributes: {
POSITION: 1,
NORMAL: 2,
TEXCOORD_0: 3,
},
indices: 0,
material: 0,
mode: 4,
},
],
},
],
nodes: [
{
mesh: 0,
},
],
samplers: [
{
magFilter: 9728,
minFilter: 9728,
},
],
scene: 0,
scenes: [
{
nodes: [0],
},
],
textures: [
{
sampler: 0,
source: 0,
},
{
sampler: 0,
source: 1,
},
],
};
return result;
}
/**
* Create an embedded glTF with the default property texture,
* and the given schema and property texture properties.
*
* @param {object} schema The JSON form of the metadata schema
* @param {object[]} properties The JSON form of the property texture properties
* @returns The glTF
*/
function createPropertyTextureGltf(schema, properties) {
const gltf = createEmbeddedGltfWithPropertyTexture(schema, properties);
/*/
// Copy-and-paste this into a file to have the actual glTF:
console.log("SPEC GLTF:");
console.log("-".repeat(80));
console.log(JSON.stringify(gltf, null, 2));
console.log("-".repeat(80));
//*/
return gltf;
}
/**
* Creates the glTF for the 'scalar' test case
*
* @returns The glTF
*/
function createPropertyTextureGltfScalar() {
const schema = {
id: "ExampleSchema",
classes: {
exampleClass: {
name: "Example class",
properties: {
example_UINT8_SCALAR: {
name: "Example SCALAR property with UINT8 components",
type: "SCALAR",
componentType: "UINT8",
},
},
},
},
};
const properties = {
example_UINT8_SCALAR: {
index: 0,
texCoord: 0,
channels: [0],
},
};
return createPropertyTextureGltf(schema, properties);
}
/**
* Creates the glTF for the normalized 'vec2' test case
*
* @param {number[]|undefined} classPropertyOffset The optional offset
* that will be defined in the class property definition
* @param {number[]|undefined} classPropertyScale The optional scale
* that will be defined in the class property definition
* @param {number[]|undefined} metadataPropertyOffset The optional offset
* that will be defined in the property texture property definition
* @param {number[]|undefined} metadataPropertyScale The optional scale
* that will be defined in the property texture property definition
* @returns The glTF
*/
function createPropertyTextureGltfNormalizedVec2(
classPropertyOffset,
classPropertyScale,
metadataPropertyOffset,
metadataPropertyScale,
) {
const schema = {
id: "ExampleSchema",
classes: {
exampleClass: {
name: "Example class",
properties: {
example_normalized_UINT8_VEC2: {
name: "Example VEC2 property with normalized UINT8 components",
type: "VEC2",
componentType: "UINT8",
normalized: true,
offset: classPropertyOffset,
scale: classPropertyScale,
},
},
},
},
};
const properties = {
example_normalized_UINT8_VEC2: {
index: 0,
texCoord: 0,
channels: [0, 1],
offset: metadataPropertyOffset,
scale: metadataPropertyScale,
},
};
return createPropertyTextureGltf(schema, properties);
}
/**
* Creates the glTF for the normalized 'vec2' test case with INT16
*
* @param {number[]|undefined} classPropertyOffset The optional offset
* that will be defined in the class property definition
* @param {number[]|undefined} classPropertyScale The optional scale
* that will be defined in the class property definition
* @param {number[]|undefined} metadataPropertyOffset The optional offset
* that will be defined in the property texture property definition
* @param {number[]|undefined} metadataPropertyScale The optional scale
* that will be defined in the property texture property definition
* @returns The glTF
*/
function createPropertyTextureGltfNormalizedInt16Vec2(
classPropertyOffset,
classPropertyScale,
metadataPropertyOffset,
metadataPropertyScale,
) {
const schema = {
id: "ExampleSchema",
classes: {
exampleClass: {
name: "Example class",
properties: {
example_normalized_INT16_VEC2: {
name: "Example VEC2 property with normalized INT16 components",
type: "VEC2",
componentType: "INT16",
normalized: true,
offset: classPropertyOffset,
scale: classPropertyScale,
},
},
},
},
};
const properties = {
example_normalized_INT16_VEC2: {
index: 0,
texCoord: 0,
channels: [0, 1, 2, 3],
offset: metadataPropertyOffset,
scale: metadataPropertyScale,
},
};
return createPropertyTextureGltf(schema, properties);
}
/**
* Creates the glTF for the test case with boolean
*
* @returns The glTF
*/
function createPropertyTextureGltfBoolean() {
const schema = {
id: "ExampleSchema",
classes: {
exampleClass: {
name: "Example class",
properties: {
example_fixed_length_BOOLEAN_array: {
name : "Example fixed-length BOOLEAN array property",
description : "An example fixed length array property, with type BOOLEAN",
type : "BOOLEAN",
array : true,
count : 4
},
},
},
},
};
const properties = {
example_fixed_length_BOOLEAN_array: {
index: 0,
texCoord: 0,
channels: [0, 1, 2, 3],
},
};
return createPropertyTextureGltf(schema, properties);
}
/**
* Move the camera to look at the unit square along -X
*
* @param {Camera} camera
*/
function fitCameraToUnitSquare(camera) {
const fov = Cesium.Math.PI_OVER_THREE;
camera.frustum.fov = fov;
camera.frustum.near = 0.01;
camera.frustum.far = 100.0;
const additionalOffset = 2.0;
const distance = 1.0 / (2.0 * Math.tan(fov * 0.5)) + additionalOffset;
camera.position = new Cesium.Cartesian3(distance, 0.5, 0.5);
camera.direction = Cesium.Cartesian3.negate(Cesium.Cartesian3.UNIT_X, new Cesium.Cartesian3());
camera.up = Cesium.Cartesian3.clone(Cesium.Cartesian3.UNIT_Z);
camera.right = Cesium.Cartesian3.clone(Cesium.Cartesian3.UNIT_Y);
}
/**
* Create a model from the given glTF, add it as a primitive
* to the given scene, and wait until it is fully loaded.
*
* @param {Scene} scene The scene
* @param {object} gltf The gltf
*/
async function loadAsModel(scene, gltf) {
const basePath = "SPEC_BASE_PATH";
const model = await Cesium.Model.fromGltfAsync({
gltf: gltf,
basePath: basePath,
// This is important to make sure that the property
// texture is fully loaded when the model is rendered!
incrementallyLoadTextures: false,
});
scene.primitives.add(model);
return model;
}
//*/
const classPropertyOffset = [1.0, 2.0];
const classPropertyScale = [2.0, 3.0];
// These should override the values from the class property:
const metadataPropertyOffset = [3.0, 4.0];
const metadataPropertyScale = [4.0, 5.0];
const gltf = createPropertyTextureGltfNormalizedInt16Vec2(
classPropertyOffset,
classPropertyScale,
metadataPropertyOffset,
metadataPropertyScale,
);
//*/
//const gltf = createPropertyTextureGltfBoolean();
const model = await loadAsModel(viewer.scene, gltf);
// TODO Have to adjust the property name/type here based on the
// createProperty... test function that is called...
const customShader = new Cesium.CustomShader({
fragmentShaderText: `
void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material)
{
vec2 value = fsInput.metadata.example_normalized_INT16_VEC2;
material.diffuse = vec3(value.x / 10.0, value.y / 10.0, 1.0);
}
`,
});
model.customShader = customShader;
fitCameraToUnitSquare(viewer.scene.camera);

One could go very far and try to be very "exhaustive" here. In their current state, these functions are tailored to the UINT8 case. For sensible tests of other types, a different property texture may make more sense. And I think that at some point, we could/should consider moving these createPropertyTexture... functions into a dedicated Specs/Utilities class/file. But that's probably beyond the scope of this PR (or would only come into play if we wanted to do more exhaustive specs for all of this).
| it("getMaximum throws without type", function () { | ||
| expect(function () { | ||
| MetadataComponentType.getMaximum(); |
Okay... I'm not entirely sure how to deal with that.
Below, there's the test
"getMinimum throws if type is not a numeric type"
that checks whether
MetadataComponentType.getMinimum(MetadataComponentType.STRING);
throws a DeveloperError.
And that passes. But looking at the code, I wondered: Why does it pass?
The getMinimum function is currently doing a
return MetadataComponentType.typeInfo[type].minimumValue;
and given that typeInfo does not contain an entry for STRING, this should throw a TypeError.
But... MetadataComponentType.STRING does not exist, meaning that
MetadataComponentType.getMinimum(MetadataComponentType.STRING);
is actually
MetadataComponentType.getMinimum(undefined);
and that does throw a DeveloperError only because of the
Check.typeOf.string("type", type);
in getMinimum.
Regardless of what ~"is currently done", and yes, even regardless of which "regressions/'breaking changes'" this causes, I think that we should try to move towards doing the right thing. And that is: The getMinimum/getMaximum... functions should handle "unexpected values" in a predetermined way. For example:
/**
* ...
* @throws DeveloperError When the given type is not a valid numeric type [TODO maybe list them...]
*
* @private
*/
MetadataComponentType.getMinimum = function (type) {
//>>includeStart('debug', pragmas.debug);
Check.typeOf.string("type", type); //---------------- TODO maybe omit this?
//>>includeEnd('debug');
const typeInfo = MetadataComponentType.typeInfo[type];
if (!defined(typeInfo)) {
throw new DeveloperError(`Type ${type} is not a numeric type`); // Proper message, with the "type"!
}
return typeInfo.minimumValue;
};
Specifically, the behavior should be "defined" (and spec'ed) for
getMinimum(undefined)
getMinimum("STRING")
getMinimum("INVALID_TYPE_FOR_SPEC")
getMinimum("UINT64")
getMinimum("UINT32")
Note:
We can talk about the details. For example: isIntegerType already does check whether the typeInfo is undefined, meaning that
isIntegerType("INVALID_TYPE_FOR_SPEC")
would return false (and not throw!).
We could also say that
getMinimum("INVALID_TYPE_FOR_SPEC")
should not throw, but return undefined. But I'd argue against that: It would hide the root cause of errors - for example, when someone accidentally passed in "VEC3" to this function (mixing up type and componentType).
I'm with you, and I think your proposal makes sense (DeveloperError where appropriate, true / false or other specific handling where appropriate, and removing the debug pragma string checks).
I do think the isIntegerType-style functions should also throw a DeveloperError if the input arg isn't in the mapping.
Same reasoning you gave above - this is probably unexpected behavior that we don't want to hide.
I'm usually in favor of 1. strict checks and 2. avoiding "failing silently".
(A simple error case that is ignored can easily lead to these bugs where "sometimes (with my secret data), something is not rendered" - these can be real time sinks).
In this area, some care has to be taken, because there has been that EXT_feature_metadata extension that is still supposed to work in CesiumJS. It is nearly equal to the current one, but there are some really subtle differences. E.g. type could still be STRING or ARRAY back then. So maybe these cases should not throw when being passed to isIntegerType? Or maybe the check should be done earlier or dedicatedly, meaning that the call to isIntegerType should not be done...? Some of that may have to be decided on a case-by-case basis with some tradeoffs.
Hmm interesting. I'm not really familiar with the old extension. Aside from type potentially being string or array, are there any other such differences? Or, put another way, is it sufficient to omit these exceptions from isIntegerType (and isUnsigned... etc)?
Or, perhaps, should we include those exceptions anyway - after all, the API is marked as experimental and subject to changing without the standard deprecation policy (for this very reason - so we can change things to how they should be without bending over backwards).
There was a bit of back and forth revolving around type and componentType, and the role that they played in these extensions. And it's hard to be 100% sure about the "right" handling for helper methods like isIntegerType.
The (idealistically) strict solution would be that
isIntegerType("STRING")
throws, but it might very well be that this function (or things like getMinimum etc) is casually called ~"somewhere", maybe as part of dealing with the legacy extension, and anticipating that it does not throw and simply returns undefined for the given types.
So it is not clear whether it could throw for STRING or ARRAY or BOOLEAN or ENUM (at least, not without finding and analyzing all call sites of that function, and/or checking the specs - and hoping that they cover everything relevant, cover everything properly, and do not contain glitches like that MetadataComponentType.STRING being undefined).
But I think that "completely invalid" strings should definitely throw - and that should happen explicitly, not only in debug pragmas.
The number of data sets that use EXT_feature_metadata is probably close to 0, but ... it has to be handled. And it's not clear which implicit assumptions are baked into the code and the legacy handling.
One approach might be (with a bit of pseudocode):
// As needed...
const integerComponentTypes = [ "UINT8" ... "INT64" ];
const floatingPointComponentTypes = [ "FLOAT32"... ];
const legacyComponentTypes = [ "BOOLEAN", "STRING", "ENUM" ]; // Verify that...
MetadataComponentType.isIntegerType = function (type) {
  if (integerComponentTypes.includes(type)) return true;
  if (floatingPointComponentTypes.includes(type)) return false;
  if (legacyComponentTypes.includes(type)) {
    console.log("Meh... 😕");
    return false;
  }
  const up = new DeveloperError("What is " + type + "?");
  throw up;
};
Yes, this is pseudocode. The TypeInfo would be nice, maybe there is still a way to pull this through. Things like the isVectorCompatible in this structure or the signed/unsigned distinction could be done with
const numericComponentTypes = [ "UINT8" ... "INT64", "FLOAT32", "FLOAT64"];
const signedIntegerComponentTypes = [ "INT8" ... "INT64" ];
const unsignedIntegerComponentTypes = [ "UINT8" ... "UINT64" ];
but maybe(!), conversely, there could be TypeInfo entries for the legacyComponentTypes from that pseudocode...?
(I had to deal with similar things in the 3D Tiles Tools, but this does not have to handle the legacy extensions, and I partially adapted some code from CesiumJS, without larger refactorings...)
  return false;
}
return true;
if (type === MetadataType.STRING) {
I remember 👴 that this was once all "fail silent", and there had been quite a few cases where I've been stumped by "Why is nothing happening here?". I added these oneTimeWarning cases in a44ac07 based on what was supported back then.
Looking at the new code, I'll have to check whether/how type==="BOOLEAN" and type==="ENUM" are handled. (Maybe they are, implicitly - but I'll have to read the shader generation part more thoroughly to understand whether there's already a shader-side handling of that)
Having tested everything together in #13163, I can say that, in fact: ENUM and BOOLEAN types do not work in this PR.
In theory, ENUM types should work just fine, but I didn't realize they don't have a componentType (they instead have their own special valueType, which we convert to a componentType). That gets fixed in #13163.
BOOLEANs, on the other hand, are complicated by the fact that the spec says they should be packed into buffers as bitstreams. The new unpackInShader function I wrote, which uses the new czm_unpackTexture function, does not account for this packing scheme. It operates on the assumption that values are packed into one or more bytes, not bits.
It's theoretically possible to support, but to avoid scope creep, I added a check to isGpuCompatible in #13163 that excludes booleans too.
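For reference, reading element i of such a bit-packed boolean array looks roughly like this (illustrative only):

// Illustrative only: element i of a bit-packed BOOLEAN array, given the raw bytes.
// This is the bitstream layout that the new unpacking path does not handle.
function readPackedBoolean(bytes, index) {
  const byteIndex = index >> 3; // which byte holds the bit
  const bitIndex = index & 7;   // which bit within that byte
  return ((bytes[byteIndex] >> bitIndex) & 1) === 1;
}

// 0b00000101 encodes [true, false, true, false, ...]:
console.log(readPackedBoolean(new Uint8Array([0b00000101]), 2)); // true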
// (Note that it's possible to treat 64-bit types as two 32-bit components, but for now that's not supported)
const componentCount = MetadataType.getComponentCount(type);
const arrayLength = classProperty.isArray ? classProperty.arrayLength : 1;
const bytesPerComponent = MetadataComponentType.getSizeInBytes(componentType);
(See other comment as well): componentType may be undefined here, e.g. when type==="BOOLEAN".
Yep - wish I had read this before debugging the issue myself!
Addressed in #13163
This is an example of the sort of breaking change this PR causes.
This sandcastle has a custom shader that assigns the metadata to int type variables. However, the metadata is actually UINT8 type in the tileset. Before this PR, we would cast to int because that's all that's supported by WebGL 1. After this PR, we cast it to uint, the actual type, so the custom shader has to change its assignments.
I think this is fine.
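For a concrete illustration (with a hypothetical property name), this is the kind of change a custom shader author would have to make:

// Before this PR, a UINT8 property was exposed to custom shaders as a signed int:
//   int id = fsInput.metadata.example_UINT8_property;
// After this PR (in a WebGL 2 context), it is exposed with its actual unsigned type:
const customShader = new Cesium.CustomShader({
  fragmentShaderText: `
    void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material)
    {
      uint id = fsInput.metadata.example_UINT8_property;
      material.diffuse = vec3(float(id) / 255.0);
    }
  `,
});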
@mzschwartz5 that's all from me, looks good. We can get this merged after CHANGES.md is updated. I'll take a look at #13163 to see the enum / bool handling.
Description
Until now, property textures only supported UINT8 data. You could pack multiple properties into a single texture, using the rgba channels to represent up to 4 UINT8 properties, but you couldn't do the inverse - use multiple channels to represent a single property of higher precision.

With this PR, you can now represent 32-bit types, including the full range of unsigned integers (previously, even if a glTF specified UINT as a property's data type, it would be cast to int on the GLSL side, to be compatible with WebGL 1, which doesn't have a uint type, and thus could lose precision).

Note: these improvements are restricted to a WebGL 2 context.

Note 2: this represents a breaking change for users using a WebGL 2 context who have custom shaders that treat unsigned integer data as signed.

TODO before merging: decide how to address the breaking change (feature flag? We need this feature to be available pretty soon for borehole visualization. If we use a feature flag, we need it to be user-settable, but the MetadataPipelineStage is internal...)

Or just document it as a breaking change, potentially? The classes surrounding metadata and property textures are all tagged as experimental and subject to change without the typical deprecation policy.
Issue number and link
#13129
Testing plan
On your local machine, copy this gltf file into the Apps/SampleData folder and name it SimplePropertyTexture.

Also copy this tileset.json file to the same directory (and name it tileset.json).

Start CesiumJS and open this sandcastle.

By default, it should display nonsense - that's the gradient texture when the RGBA channels are interpreted literally as colors. If you toggle the view to "Inside Temperature", you now see a grayscale gradient - each pixel of the texture is reinterpreted as a single 32-bit float, which has been constructed / populated to appear as a gradient when used as brightness.

Author checklist
- I have added my name to CONTRIBUTORS.md
- I have updated CHANGES.md with a short summary of my change