

The new functionality was previewed at the firm’s Substance Day event at GDC 2022. The Photoshop-style system enables users to remove unwanted details from texture maps or HDRIs simply by painting a freehand mask over the artefacts to be removed, with Sampler doing the rest of the work: it samples the surrounding parts of the image to generate a clean replacement for the masked area.

In Substance 3D Sampler’s case, a key use of Content-Aware Fill will be cleaning up materials created from source photos, with the user painting a rough freehand mask to identify the details to be removed. The potential benefits will be immediately apparent to anyone who has used the equivalent features in other Adobe apps: most famously in Photoshop, and more recently in After Effects.

Remove unwanted details semi-automatically from all of the channels of a material at once
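Adobe has not published how Sampler’s fill works, and production content-aware fill uses far more sophisticated patch-based synthesis. Still, the core idea (regenerating a masked region from the surrounding image, with one painted mask applied to every channel map of a material) can be sketched with a deliberately naive diffusion fill in NumPy. Everything below is illustrative, not Adobe’s algorithm:

```python
# Toy illustration of mask-based fill: pixels under the painted mask are
# regenerated from the surrounding image, and the same mask is applied
# to every channel map of the material at once. This naive diffusion
# fill is NOT Adobe's algorithm, just a sketch of the idea.
import numpy as np

def naive_fill(channel: np.ndarray, mask: np.ndarray, iters: int = 300) -> np.ndarray:
    """Replace masked pixels by diffusing in values from unmasked neighbours."""
    out = channel.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude starting guess
    for _ in range(iters):
        # Average of the four neighbours (np.roll wraps at the borders,
        # which is acceptable for a toy example).
        neigh = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                 np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = neigh[mask]            # unmasked pixels stay fixed
    return out

# One painted mask cleans every channel of the material at once.
material = {
    "basecolor": np.full((32, 32), 100.0),
    "roughness": np.full((32, 32), 40.0),
}
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True                  # region the user painted over
for name in material:
    material[name][mask] = 255.0           # simulate an unwanted blemish
    material[name] = naive_fill(material[name], mask)
```

The per-channel loop is the relevant point: the user paints the mask once, and the same fill is run over base color, roughness, normal and the rest, keeping the maps consistent with each other.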

# Substance Alchemist normal map update
See all parts of this breakdown: Photogrammetry: making Nanite meshes for UE5

In this step we will prepare the textures to import into Unreal. Right now we have the original photo-scanned mesh (let’s call it Mesh.fbx), its de-lit base color texture Mesh_delit.tif, and the re-topologized mesh SM_Mesh.fbx with new UVs prepared for import into Unreal. We will first create a base color texture that matches the new mesh (the bake), then, optionally, create normal and roughness textures.

But before the baking begins, we will open our two meshes together in Maya and check two things: they should be in the same place in the world (otherwise the bake will fail, of course), and they should be at least 1 m across. If the meshes are too small (a few centimeters across), the tiny high-poly triangles become microscopic, which causes issues with normals both for baking and in Unreal. Even if the mesh is small IRL, you need to scale it up (you can scale it back down in your level in Unreal). Finally, we will freeze the transforms so rotation is 0 and scale is 1.

For baking the base color texture to the new mesh and its UVs we use xNormal. Specify Mesh.fbx as the “High definition mesh”, Mesh_delit.tif as its “base texture to bake”, and SM_Mesh.fbx as the “Low definition mesh”. Then, in the Baking options tab, specify the output file (here we will call it Mesh_xn.tga), check “Bake base texture” and hit the “Generate maps” button. This will give you the base color texture you can import into Unreal; we usually rename it to something like T_Mesh_D.tga (D for Diffuse).

Baking base texture in xNormal

For creating the optional roughness and normal maps we use Substance Alchemist. Simply drag T_Mesh_D.tga into the program and it will generate various textures using its AI-based algorithms. We will export the roughness and normal maps; then, for our Unreal material, we will add the roughness as the alpha channel of the base color texture in Photoshop. In this case we will save the combined texture as a new file called T_Mesh_DR.tga (R for Roughness), and we will call the normal map T_Mesh_N.tga.

Alchemist also allows you to do the de-lighting, but it doesn’t take the mesh shape into account. It’s totally possible to use it for simple cases, for example when the mesh is very flat, but with more complex meshes and lighting scenarios the Agisoft De-Lighter works better. If we want to use Alchemist for de-lighting, we skip the De-Lighter step, use xNormal to bake the original photo-scanned texture, then bring it into Alchemist and produce the base color there as well.

Exporting textures from Alchemist

Adobe has posted a sneak peek showing the new Content-Aware Fill feature in the next update to Substance 3D Sampler, its scan-based material-authoring tool, previously known as Substance Alchemist.

6 photo-scanned rocks, stones and walls from the Swiss Alps are available as game assets for UE5 on ArtStation.
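As an aside, the step of packing roughness into the base color’s alpha channel does not have to be done by hand in Photoshop. Here is a small sketch using Pillow; the T_Mesh_* file names follow the article’s convention, but the script itself (and the roughness file name T_Mesh_R.tga) is our own illustrative addition:

```python
# Pack a roughness map into the alpha channel of the base color map,
# producing a combined RGBA texture like the article's T_Mesh_DR.tga.
# A scripted stand-in for the manual Photoshop step described above.
from PIL import Image

def pack_roughness(base_color: Image.Image, roughness: Image.Image) -> Image.Image:
    base = base_color.convert("RGB")        # copy, drop any existing alpha
    rough = roughness.convert("L")          # single grey channel
    if rough.size != base.size:
        rough = rough.resize(base.size)     # the maps must match in size
    base.putalpha(rough)                    # roughness becomes the alpha
    return base

# Example usage (assuming the exported files exist on disk):
# combined = pack_roughness(Image.open("T_Mesh_D.tga"),
#                           Image.open("T_Mesh_R.tga"))
# combined.save("T_Mesh_DR.tga")
```

Packing roughness into alpha this way halves the number of texture samples the Unreal material needs, which is presumably why the article combines them.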
