
Unity URP RenderTexture Optimization (Part 2): Depth Texture Optimization

Contents

Preface

1. Locating the Depth Information

1.1:k_DepthStencilFormat

1.2:k_DepthBufferBits

1.3:_CameraDepthTexture precision and size

1.4:_CameraDepthAttachment count

2. Full Code


Preface

At the end of the previous article (Unity URP RenderTexture优化_unity urp优化-CSDN博客) I suggested a way to optimize _CameraDepthAttachment. Honestly, that was not the final result: with the depth texture toggle enabled, many depth copies are still created, still totaling around 15 MB, and they are only cleared by the modified code once depth texture rendering is turned off. So if a project does need depth information, but this many RT resources are too expensive for mobile devices, how should we optimize them?

In what follows I will optimize them based on my own experience.

This article focuses on optimizing the depth information of the Unity URP pipeline. The core URP code may differ between pipeline versions, but I will explain the direction of the optimization so it can be adapted to other URP versions. The version used below is Unity 2022.3.37f1.

1. Locating the Depth Information

Using VS Code is a good choice here: it opens much faster than Visual Studio and makes searching far more convenient. We again need to modify the source of the UniversalRenderer script (path: Packages\com.unity.render-pipelines.universal@14.0.11\Runtime\UniversalRenderer.cs, i.e. inside the Runtime folder of the URP package).

VS Code's Ctrl+F quickly finds the properties, fields, and methods that contain "depth".

1.1:k_DepthStencilFormat

This is the depth-stencil format of the depth buffer, the same concept as the format you set when creating a RenderTexture.

The URP source defaults to GraphicsFormat.D32_SFloat_S8_UInt. On mobile we do not need that much precision; it can be changed to GraphicsFormat.D24_UNorm_S8_UInt or another D-prefixed GraphicsFormat such as D16_UNorm. Check whether the format is actually supported on your target devices; the GraphicsFormat enum documentation describes each format in detail.

If the format's precision is too low, there is not enough depth resolution and you will see z-fighting artifacts, so tune the format per project. I lowered it according to my target hardware; adjust it as your own project requires.
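To make the support check concrete, here is a minimal standalone sketch (my own illustration, not code from the URP package) of picking the cheapest depth-stencil format the device actually supports via SystemInfo.IsFormatSupported:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

static class DepthFormatUtil
{
    // Returns the cheapest supported depth-stencil format, falling back to
    // the URP default. Note: D16_UNorm has no stencil bits, so skip it if
    // your passes rely on stencil.
    public static GraphicsFormat PickDepthStencilFormat()
    {
        var candidates = new[]
        {
            GraphicsFormat.D16_UNorm,
            GraphicsFormat.D24_UNorm_S8_UInt,
            GraphicsFormat.D32_SFloat_S8_UInt,
        };
        foreach (var fmt in candidates)
            if (SystemInfo.IsFormatSupported(fmt, FormatUsage.Render))
                return fmt;
        return GraphicsFormat.D32_SFloat_S8_UInt;
    }
}
```

You could log the result on your test devices before hardcoding a format into the renderer.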

1.2:k_DepthBufferBits

URP defaults to 32 bits. This is the bit depth of the depth buffer, the same setting as on a RenderTexture.

Unity only accepts certain depth buffer bit depths; the common values are 16, 24, and 32.

Likewise, setting it too low can cause z-fighting, so verify it against your settings. I set it to 16 here.
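Putting 1.1 and 1.2 together, the edit at the top of UniversalRenderer.cs looks roughly like this (it mirrors the full code in section 2; the platform defines are a choice, not a requirement):

```csharp
// Lower the depth defaults on mobile (and in-editor, to preview the result);
// keep the original URP values everywhere else.
#if UNITY_IOS || UNITY_ANDROID || UNITY_EDITOR
        const GraphicsFormat k_DepthStencilFormat = GraphicsFormat.D24_UNorm_S8_UInt;
        const int k_DepthBufferBits = 16;   // raise to 24 if you see z-fighting
#else
        const GraphicsFormat k_DepthStencilFormat = GraphicsFormat.D32_SFloat_S8_UInt;
        const int k_DepthBufferBits = 32;
#endif
```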

1.3:_CameraDepthTexture precision and size

My change to _CameraDepthTexture's precision and size adds a check for whether the device supports GraphicsFormat.R32_SFloat, since some devices may not support that much precision; unsupported devices fall through to a fallback branch.

In the supported branch, the color format is set to None, the depth precision follows the values configured above, and the width and height are both halved.

In the fallback branch, the color format is adjusted to whatever the device supports, and the width and height are likewise halved.

The result is a _CameraDepthTexture at half the resolution with the adjusted precision.
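Since the original screenshots are not reproduced here, the idea can be sketched as follows (variable names follow the URP 14 source, but the branch contents are my reconstruction of the described change, not a verbatim diff):

```csharp
// Inside UniversalRenderer.Setup(): allocate _CameraDepthTexture at half
// resolution, with a fallback for devices that lack R32_SFloat support.
var depthDescriptor = cameraTargetDescriptor;
depthDescriptor.width  /= 2;
depthDescriptor.height /= 2;
depthDescriptor.msaaSamples = 1;

if (SystemInfo.IsFormatSupported(GraphicsFormat.R32_SFloat, FormatUsage.Render))
{
    // Supported branch: no color format; depth goes into the
    // depth-stencil surface using the constants configured above.
    depthDescriptor.graphicsFormat = GraphicsFormat.None;
    depthDescriptor.depthStencilFormat = k_DepthStencilFormat;
    depthDescriptor.depthBufferBits = k_DepthBufferBits;
}
else
{
    // Fallback branch: pick a lower-precision color format the device supports.
    depthDescriptor.graphicsFormat = SystemInfo.GetCompatibleFormat(GraphicsFormat.R16_UNorm, FormatUsage.Render);
    depthDescriptor.depthStencilFormat = GraphicsFormat.None;
    depthDescriptor.depthBufferBits = 0;
}

RenderingUtils.ReAllocateIfNeeded(ref m_DepthTexture, depthDescriptor, FilterMode.Point, TextureWrapMode.Clamp, name: "_CameraDepthTexture");
```

Halving width and height alone cuts the texture's memory to a quarter, before the format change is even counted.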

1.4:_CameraDepthAttachment count

The changes above do not actually save much by themselves; the bulk of the memory is still the depth copies: there are too many of them, and their size and precision are too high.

The key is the first change, a release that greatly reduces how many copies are generated. The second change adapts the depth handling for orthographic cameras; whether you need it depends on the project. If you do not use orthographic depth you can skip it, or simply set k_DepthBufferBits to 24 from the start, since 16 bits is too little precision and causes z-fighting.
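As an illustration only (the exact condition and placement are assumptions, not the verbatim diff), the release amounts to something like this inside Setup():

```csharp
// If nothing this frame actually reads the depth attachment, release it so
// the ~15 MB of per-camera depth copies are not kept alive.
if (!requiresDepthTexture && !cameraHasPostProcessingWithDepth)
{
    m_CameraDepthAttachment?.Release();
    m_CameraDepthAttachment = null;
}
```

For orthographic cameras, test carefully after this change; as noted above, bumping k_DepthBufferBits to 24 is the safer default if 16 bits shows z-fighting.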

That concludes my depth texture optimization. The full code follows (including the RT changes from the previous article).

2. Full Code


using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering.Universal.Internal;

namespace UnityEngine.Rendering.Universal
{
    /// <summary>
    /// Rendering modes for Universal renderer.
    /// </summary>
    public enum RenderingMode
    {
        /// <summary>Render all objects and lighting in one pass, with a hard limit on the number of lights that can be applied on an object.</summary>
        Forward = 0,
        /// <summary>Render all objects and lighting in one pass using a clustered data structure to access lighting data.</summary>
        [InspectorName("Forward+")]
        ForwardPlus = 2,
        /// <summary>Render all objects first in a g-buffer pass, then apply all lighting in a separate pass using deferred shading.</summary>
        Deferred = 1
    };

    /// <summary>
    /// When the Universal Renderer should use Depth Priming in Forward mode.
    /// </summary>
    public enum DepthPrimingMode
    {
        /// <summary>Depth Priming will never be used.</summary>
        Disabled,
        /// <summary>Depth Priming will only be used if there is a depth prepass needed by any of the render passes.</summary>
        Auto,
        /// <summary>A depth prepass will be explicitly requested so Depth Priming can be used.</summary>
        Forced,
    }

    /// <summary>
    /// Default renderer for Universal RP.
    /// This renderer is supported on all Universal RP supported platforms.
    /// It uses a classic forward rendering strategy with per-object light culling.
    /// </summary>
    public sealed partial class UniversalRenderer : ScriptableRenderer
    {
#if UNITY_IOS || UNITY_ANDROID || UNITY_EDITOR
        const GraphicsFormat k_DepthStencilFormat = GraphicsFormat.D24_UNorm_S8_UInt;
        const int k_DepthBufferBits = 16;
#else
        const GraphicsFormat k_DepthStencilFormat = GraphicsFormat.D32_SFloat_S8_UInt;
        const int k_DepthBufferBits = 32;
#endif
        const int k_FinalBlitPassQueueOffset = 1;
        const int k_AfterFinalBlitPassQueueOffset = k_FinalBlitPassQueueOffset + 1;
        static readonly List<ShaderTagId> k_DepthNormalsOnly = new List<ShaderTagId> { new ShaderTagId("DepthNormalsOnly") };

        private static class Profiling
        {
            private const string k_Name = nameof(UniversalRenderer);
            public static readonly ProfilingSampler createCameraRenderTarget = new ProfilingSampler($"{k_Name}.{nameof(CreateCameraRenderTarget)}");
        }

        /// <inheritdoc/>
        public override int SupportedCameraStackingTypes()
        {
            switch (m_RenderingMode)
            {
                case RenderingMode.Forward:
                case RenderingMode.ForwardPlus:
                    return 1 << (int)CameraRenderType.Base | 1 << (int)CameraRenderType.Overlay;
                case RenderingMode.Deferred:
                    return 1 << (int)CameraRenderType.Base;
                default:
                    return 0;
            }
        }

        // Rendering mode setup from UI. The final rendering mode used can be different. See renderingModeActual.
        internal RenderingMode renderingModeRequested => m_RenderingMode;

        // Actual rendering mode, which may be different (ex: wireframe rendering, hardware not capable of deferred rendering).
        internal RenderingMode renderingModeActual => renderingModeRequested == RenderingMode.Deferred && (GL.wireframe || (DebugHandler != null && DebugHandler.IsActiveModeUnsupportedForDeferred) || m_DeferredLights == null || !m_DeferredLights.IsRuntimeSupportedThisFrame() || m_DeferredLights.IsOverlay)
            ? RenderingMode.Forward
            : this.renderingModeRequested;

        bool m_Clustering;

        internal bool accurateGbufferNormals => m_DeferredLights != null ? m_DeferredLights.AccurateGbufferNormals : false;

#if ADAPTIVE_PERFORMANCE_2_1_0_OR_NEWER
        internal bool needTransparencyPass { get { return !UniversalRenderPipeline.asset.useAdaptivePerformance || !AdaptivePerformance.AdaptivePerformanceRenderSettings.SkipTransparentObjects; } }
#endif

        /// <summary>Property to control the depth priming behavior of the forward rendering path.</summary>
        public DepthPrimingMode depthPrimingMode { get { return m_DepthPrimingMode; } set { m_DepthPrimingMode = value; } }

        DepthOnlyPass m_DepthPrepass;
        DepthNormalOnlyPass m_DepthNormalPrepass;
        CopyDepthPass m_PrimedDepthCopyPass;
        MotionVectorRenderPass m_MotionVectorPass;
        MainLightShadowCasterPass m_MainLightShadowCasterPass;
        AdditionalLightsShadowCasterPass m_AdditionalLightsShadowCasterPass;
        GBufferPass m_GBufferPass;
        CopyDepthPass m_GBufferCopyDepthPass;
        DeferredPass m_DeferredPass;
        DrawObjectsPass m_RenderOpaqueForwardOnlyPass;
        DrawObjectsPass m_RenderOpaqueForwardPass;
        DrawObjectsWithRenderingLayersPass m_RenderOpaqueForwardWithRenderingLayersPass;
        DrawSkyboxPass m_DrawSkyboxPass;
        CopyDepthPass m_CopyDepthPass;
        CopyColorPass m_CopyColorPass;
        TransparentSettingsPass m_TransparentSettingsPass;
        DrawObjectsPass m_RenderTransparentForwardPass;
        InvokeOnRenderObjectCallbackPass m_OnRenderObjectCallbackPass;
        FinalBlitPass m_FinalBlitPass;
        CapturePass m_CapturePass;
#if ENABLE_VR && ENABLE_XR_MODULE
        XROcclusionMeshPass m_XROcclusionMeshPass;
        CopyDepthPass m_XRCopyDepthPass;
#endif
#if UNITY_EDITOR
        CopyDepthPass m_FinalDepthCopyPass;
#endifDrawScreenSpaceUIPass m_DrawOffscreenUIPass;DrawScreenSpaceUIPass m_DrawOverlayUIPass;internal RenderTargetBufferSystem m_ColorBufferSystem;internal RTHandle m_ActiveCameraColorAttachment;RTHandle m_ColorFrontBuffer;internal RTHandle m_ActiveCameraDepthAttachment;internal RTHandle m_CameraDepthAttachment;RTHandle m_XRTargetHandleAlias;internal RTHandle m_DepthTexture;RTHandle m_NormalsTexture;RTHandle m_DecalLayersTexture;RTHandle m_OpaqueColor;RTHandle m_MotionVectorColor;RTHandle m_MotionVectorDepth;ForwardLights m_ForwardLights;DeferredLights m_DeferredLights;RenderingMode m_RenderingMode;DepthPrimingMode m_DepthPrimingMode;CopyDepthMode m_CopyDepthMode;bool m_DepthPrimingRecommended;StencilState m_DefaultStencilState;LightCookieManager m_LightCookieManager;IntermediateTextureMode m_IntermediateTextureMode;bool m_VulkanEnablePreTransform;// Materials used in URP Scriptable Render PassesMaterial m_BlitMaterial = null;Material m_BlitHDRMaterial = null;Material m_CopyDepthMaterial = null;Material m_SamplingMaterial = null;Material m_StencilDeferredMaterial = null;Material m_CameraMotionVecMaterial = null;Material m_ObjectMotionVecMaterial = null;PostProcessPasses m_PostProcessPasses;internal ColorGradingLutPass colorGradingLutPass { get => m_PostProcessPasses.colorGradingLutPass; }internal PostProcessPass postProcessPass { get => m_PostProcessPasses.postProcessPass; }internal PostProcessPass finalPostProcessPass { get => m_PostProcessPasses.finalPostProcessPass; }internal RTHandle colorGradingLut { get => m_PostProcessPasses.colorGradingLut; }internal DeferredLights deferredLights { get => m_DeferredLights; }/// <summary>/// Constructor for the Universal Renderer./// </summary>/// <param name="data">The settings to create the renderer with.</param>public UniversalRenderer(UniversalRendererData data) : base(data){// Query and cache runtime platform info first before setting up URP.PlatformAutoDetect.Initialize();#if ENABLE_VR && 
ENABLE_XR_MODULE
            Experimental.Rendering.XRSystem.Initialize(XRPassUniversal.Create, data.xrSystemData.shaders.xrOcclusionMeshPS, data.xrSystemData.shaders.xrMirrorViewPS);
#endifm_BlitMaterial = CoreUtils.CreateEngineMaterial(data.shaders.coreBlitPS);m_BlitHDRMaterial = CoreUtils.CreateEngineMaterial(data.shaders.blitHDROverlay);m_CopyDepthMaterial = CoreUtils.CreateEngineMaterial(data.shaders.copyDepthPS);m_SamplingMaterial = CoreUtils.CreateEngineMaterial(data.shaders.samplingPS);m_StencilDeferredMaterial = CoreUtils.CreateEngineMaterial(data.shaders.stencilDeferredPS);m_CameraMotionVecMaterial = CoreUtils.CreateEngineMaterial(data.shaders.cameraMotionVector);m_ObjectMotionVecMaterial = CoreUtils.CreateEngineMaterial(data.shaders.objectMotionVector);StencilStateData stencilData = data.defaultStencilState;m_DefaultStencilState = StencilState.defaultValue;m_DefaultStencilState.enabled = stencilData.overrideStencilState;m_DefaultStencilState.SetCompareFunction(stencilData.stencilCompareFunction);m_DefaultStencilState.SetPassOperation(stencilData.passOperation);m_DefaultStencilState.SetFailOperation(stencilData.failOperation);m_DefaultStencilState.SetZFailOperation(stencilData.zFailOperation);m_IntermediateTextureMode = data.intermediateTextureMode;if (UniversalRenderPipeline.asset?.supportsLightCookies ?? false){var settings = LightCookieManager.Settings.Create();var asset = UniversalRenderPipeline.asset;if (asset){settings.atlas.format = asset.additionalLightsCookieFormat;settings.atlas.resolution = asset.additionalLightsCookieResolution;}m_LightCookieManager = new LightCookieManager(ref settings);}this.stripShadowsOffVariants = true;this.stripAdditionalLightOffVariants = true;
#if ENABLE_VR && ENABLE_VR_MODULE
#if PLATFORM_WINRT || PLATFORM_ANDROID
            // AdditionalLightOff variant is available on HL&Quest platform due to performance consideration.
            this.stripAdditionalLightOffVariants = !PlatformAutoDetect.isXRMobile;
#endif
#endifForwardLights.InitParams forwardInitParams;forwardInitParams.lightCookieManager = m_LightCookieManager;forwardInitParams.forwardPlus = data.renderingMode == RenderingMode.ForwardPlus;m_Clustering = data.renderingMode == RenderingMode.ForwardPlus;m_ForwardLights = new ForwardLights(forwardInitParams);//m_DeferredLights.LightCulling = data.lightCulling;this.m_RenderingMode = data.renderingMode;this.m_DepthPrimingMode = data.depthPrimingMode;this.m_CopyDepthMode = data.copyDepthMode;#if UNITY_ANDROID || UNITY_IOS || UNITY_TVOSthis.m_DepthPrimingRecommended = false;
#else
            this.m_DepthPrimingRecommended = true;
#endif// Note: Since all custom render passes inject first and we have stable sort,// we inject the builtin passes in the before events.m_MainLightShadowCasterPass = new MainLightShadowCasterPass(RenderPassEvent.BeforeRenderingShadows);m_AdditionalLightsShadowCasterPass = new AdditionalLightsShadowCasterPass(RenderPassEvent.BeforeRenderingShadows);#if ENABLE_VR && ENABLE_XR_MODULEm_XROcclusionMeshPass = new XROcclusionMeshPass(RenderPassEvent.BeforeRenderingOpaques);// Schedule XR copydepth right after m_FinalBlitPassm_XRCopyDepthPass = new CopyDepthPass(RenderPassEvent.AfterRendering + k_AfterFinalBlitPassQueueOffset, m_CopyDepthMaterial);
#endifm_DepthPrepass = new DepthOnlyPass(RenderPassEvent.BeforeRenderingPrePasses, RenderQueueRange.opaque, data.opaqueLayerMask);m_DepthNormalPrepass = new DepthNormalOnlyPass(RenderPassEvent.BeforeRenderingPrePasses, RenderQueueRange.opaque, data.opaqueLayerMask);if (renderingModeRequested == RenderingMode.Forward || renderingModeRequested == RenderingMode.ForwardPlus){m_PrimedDepthCopyPass = new CopyDepthPass(RenderPassEvent.AfterRenderingPrePasses, m_CopyDepthMaterial, true);}if (this.renderingModeRequested == RenderingMode.Deferred){var deferredInitParams = new DeferredLights.InitParams();deferredInitParams.stencilDeferredMaterial = m_StencilDeferredMaterial;deferredInitParams.lightCookieManager = m_LightCookieManager;m_DeferredLights = new DeferredLights(deferredInitParams, useRenderPassEnabled);m_DeferredLights.AccurateGbufferNormals = data.accurateGbufferNormals;m_GBufferPass = new GBufferPass(RenderPassEvent.BeforeRenderingGbuffer, RenderQueueRange.opaque, data.opaqueLayerMask, m_DefaultStencilState, stencilData.stencilReference, m_DeferredLights);// Forward-only pass only runs if deferred renderer is enabled.// It allows specific materials to be rendered in a forward-like pass.// We render both gbuffer pass and forward-only pass before the deferred lighting pass so we can minimize copies of depth buffer and// benefits from some depth rejection.// - If a material can be rendered either forward or deferred, then it should declare a UniversalForward and a UniversalGBuffer pass.// - If a material cannot be lit in deferred (unlit, bakedLit, special material such as hair, skin shader), then it should declare UniversalForwardOnly pass// - Legacy materials have unamed pass, which is implicitely renamed as SRPDefaultUnlit. 
In that case, they are considered forward-only too.// TO declare a material with unnamed pass and UniversalForward/UniversalForwardOnly pass is an ERROR, as the material will be rendered twice.StencilState forwardOnlyStencilState = DeferredLights.OverwriteStencil(m_DefaultStencilState, (int)StencilUsage.MaterialMask);ShaderTagId[] forwardOnlyShaderTagIds = new ShaderTagId[]{new ShaderTagId("UniversalForwardOnly"),new ShaderTagId("SRPDefaultUnlit"), // Legacy shaders (do not have a gbuffer pass) are considered forward-only for backward compatibilitynew ShaderTagId("LightweightForward") // Legacy shaders (do not have a gbuffer pass) are considered forward-only for backward compatibility};int forwardOnlyStencilRef = stencilData.stencilReference | (int)StencilUsage.MaterialUnlit;m_GBufferCopyDepthPass = new CopyDepthPass(RenderPassEvent.BeforeRenderingGbuffer + 1, m_CopyDepthMaterial, true);m_DeferredPass = new DeferredPass(RenderPassEvent.BeforeRenderingDeferredLights, m_DeferredLights);m_RenderOpaqueForwardOnlyPass = new DrawObjectsPass("Render Opaques Forward Only", forwardOnlyShaderTagIds, true, RenderPassEvent.BeforeRenderingOpaques, RenderQueueRange.opaque, data.opaqueLayerMask, forwardOnlyStencilState, forwardOnlyStencilRef);}// Always create this pass even in deferred because we use it for wireframe rendering in the Editor or offscreen depth texture rendering.m_RenderOpaqueForwardPass = new DrawObjectsPass(URPProfileId.DrawOpaqueObjects, true, RenderPassEvent.BeforeRenderingOpaques, RenderQueueRange.opaque, data.opaqueLayerMask, m_DefaultStencilState, stencilData.stencilReference);m_RenderOpaqueForwardWithRenderingLayersPass = new DrawObjectsWithRenderingLayersPass(URPProfileId.DrawOpaqueObjects, true, RenderPassEvent.BeforeRenderingOpaques, RenderQueueRange.opaque, data.opaqueLayerMask, m_DefaultStencilState, stencilData.stencilReference);bool copyDepthAfterTransparents = m_CopyDepthMode == CopyDepthMode.AfterTransparents;RenderPassEvent copyDepthEvent = 
copyDepthAfterTransparents ? RenderPassEvent.AfterRenderingTransparents : RenderPassEvent.AfterRenderingSkybox;m_CopyDepthPass = new CopyDepthPass(copyDepthEvent,m_CopyDepthMaterial,shouldClear: true,copyResolvedDepth: RenderingUtils.MultisampleDepthResolveSupported() && SystemInfo.supportsMultisampleAutoResolve && copyDepthAfterTransparents);// Motion vectors depend on the (copy) depth texture. Depth is reprojected to calculate motion vectors.m_MotionVectorPass = new MotionVectorRenderPass(copyDepthEvent + 1, m_CameraMotionVecMaterial, m_ObjectMotionVecMaterial, data.opaqueLayerMask);m_DrawSkyboxPass = new DrawSkyboxPass(RenderPassEvent.BeforeRenderingSkybox);m_CopyColorPass = new CopyColorPass(RenderPassEvent.AfterRenderingSkybox, m_SamplingMaterial, m_BlitMaterial);
#if ADAPTIVE_PERFORMANCE_2_1_0_OR_NEWER
            if (needTransparencyPass)
#endif{m_TransparentSettingsPass = new TransparentSettingsPass(RenderPassEvent.BeforeRenderingTransparents, data.shadowTransparentReceive);m_RenderTransparentForwardPass = new DrawObjectsPass(URPProfileId.DrawTransparentObjects, false, RenderPassEvent.BeforeRenderingTransparents, RenderQueueRange.transparent, data.transparentLayerMask, m_DefaultStencilState, stencilData.stencilReference);}m_OnRenderObjectCallbackPass = new InvokeOnRenderObjectCallbackPass(RenderPassEvent.BeforeRenderingPostProcessing);m_DrawOffscreenUIPass = new DrawScreenSpaceUIPass(RenderPassEvent.BeforeRenderingPostProcessing, true);m_DrawOverlayUIPass = new DrawScreenSpaceUIPass(RenderPassEvent.AfterRendering + k_AfterFinalBlitPassQueueOffset, false); // after m_FinalBlitPass{var postProcessParams = PostProcessParams.Create();postProcessParams.blitMaterial = m_BlitMaterial;postProcessParams.requestHDRFormat = GraphicsFormat.B10G11R11_UFloatPack32;var asset = UniversalRenderPipeline.asset;if (asset)postProcessParams.requestHDRFormat = UniversalRenderPipeline.MakeRenderTextureGraphicsFormat(asset.supportsHDR, asset.hdrColorBufferPrecision, false);m_PostProcessPasses = new PostProcessPasses(data.postProcessData, ref postProcessParams);}m_CapturePass = new CapturePass(RenderPassEvent.AfterRendering);m_FinalBlitPass = new FinalBlitPass(RenderPassEvent.AfterRendering + k_FinalBlitPassQueueOffset, m_BlitMaterial, m_BlitHDRMaterial);#if UNITY_EDITORm_FinalDepthCopyPass = new CopyDepthPass(RenderPassEvent.AfterRendering + 9, m_CopyDepthMaterial);
#endif// RenderTexture format depends on camera and pipeline (HDR, non HDR, etc)// Samples (MSAA) depend on camera and pipelinem_ColorBufferSystem = new RenderTargetBufferSystem("_CameraColorAttachment");supportedRenderingFeatures = new RenderingFeatures();if (this.renderingModeRequested == RenderingMode.Deferred){// Deferred rendering does not support MSAA.this.supportedRenderingFeatures.msaa = false;// Avoid legacy platforms: use vulkan instead.unsupportedGraphicsDeviceTypes = new GraphicsDeviceType[]{GraphicsDeviceType.OpenGLCore,GraphicsDeviceType.OpenGLES2,GraphicsDeviceType.OpenGLES3};}LensFlareCommonSRP.mergeNeeded = 0;LensFlareCommonSRP.maxLensFlareWithOcclusionTemporalSample = 1;LensFlareCommonSRP.Initialize();m_VulkanEnablePreTransform = GraphicsSettings.HasShaderDefine(BuiltinShaderDefine.UNITY_PRETRANSFORM_TO_DISPLAY_ORIENTATION);}/// <inheritdoc />protected override void Dispose(bool disposing){m_ForwardLights.Cleanup();m_GBufferPass?.Dispose();m_PostProcessPasses.Dispose();m_FinalBlitPass?.Dispose();m_DrawOffscreenUIPass?.Dispose();m_DrawOverlayUIPass?.Dispose();m_XRTargetHandleAlias?.Release();ReleaseRenderTargets();base.Dispose(disposing);CoreUtils.Destroy(m_BlitMaterial);CoreUtils.Destroy(m_BlitHDRMaterial);CoreUtils.Destroy(m_CopyDepthMaterial);CoreUtils.Destroy(m_SamplingMaterial);CoreUtils.Destroy(m_StencilDeferredMaterial);CoreUtils.Destroy(m_CameraMotionVecMaterial);CoreUtils.Destroy(m_ObjectMotionVecMaterial);CleanupRenderGraphResources();LensFlareCommonSRP.Dispose();}internal override void ReleaseRenderTargets(){m_ColorBufferSystem.Dispose();if (m_DeferredLights != null && 
!m_DeferredLights.UseRenderPass)m_GBufferPass?.Dispose();m_PostProcessPasses.ReleaseRenderTargets();m_MainLightShadowCasterPass?.Dispose();m_AdditionalLightsShadowCasterPass?.Dispose();m_CameraDepthAttachment?.Release();m_DepthTexture?.Release();m_NormalsTexture?.Release();m_DecalLayersTexture?.Release();m_OpaqueColor?.Release();m_MotionVectorColor?.Release();m_MotionVectorDepth?.Release();hasReleasedRTs = true;}private void SetupFinalPassDebug(ref CameraData cameraData){if ((DebugHandler != null) && DebugHandler.IsActiveForCamera(ref cameraData)){if (DebugHandler.TryGetFullscreenDebugMode(out DebugFullScreenMode fullScreenDebugMode, out int textureHeightPercent) &&(fullScreenDebugMode != DebugFullScreenMode.ReflectionProbeAtlas || m_Clustering)){Camera camera = cameraData.camera;float screenWidth = camera.pixelWidth;float screenHeight = camera.pixelHeight;var relativeSize = Mathf.Clamp01(textureHeightPercent / 100f);var height = relativeSize * screenHeight;var width = relativeSize * screenWidth;if (fullScreenDebugMode == DebugFullScreenMode.ReflectionProbeAtlas){// Ensure that atlas is not stretched, but doesn't take up more than the percentage in any dimension.var texture = m_ForwardLights.reflectionProbeManager.atlasRT;var targetWidth = height * texture.width / texture.height;if (targetWidth > width){height = width * texture.height / texture.width;}else{width = targetWidth;}}float normalizedSizeX = width / screenWidth;float normalizedSizeY = height / screenHeight;Rect normalizedRect = new Rect(1 - normalizedSizeX, 1 - normalizedSizeY, normalizedSizeX, normalizedSizeY);switch (fullScreenDebugMode){case DebugFullScreenMode.Depth:{DebugHandler.SetDebugRenderTarget(m_DepthTexture.nameID, normalizedRect, true);break;}case DebugFullScreenMode.AdditionalLightsShadowMap:{DebugHandler.SetDebugRenderTarget(m_AdditionalLightsShadowCasterPass.m_AdditionalLightsShadowmapHandle, normalizedRect, false);break;}case 
DebugFullScreenMode.MainLightShadowMap:{DebugHandler.SetDebugRenderTarget(m_MainLightShadowCasterPass.m_MainLightShadowmapTexture, normalizedRect, false);break;}case DebugFullScreenMode.ReflectionProbeAtlas:{DebugHandler.SetDebugRenderTarget(m_ForwardLights.reflectionProbeManager.atlasRT, normalizedRect, false);break;}default:{break;}}}else{DebugHandler.ResetDebugRenderTarget();}}}/// <summary>/// Returns if the camera renders to a offscreen depth texture./// </summary>/// <param name="cameraData">The camera data for the camera being rendered.</param>/// <returns>Returns true if the camera renders to depth without any color buffer. It will return false otherwise.</returns>public static bool IsOffscreenDepthTexture(in CameraData cameraData) => cameraData.targetTexture != null && cameraData.targetTexture.format == RenderTextureFormat.Depth;bool IsDepthPrimingEnabled(ref CameraData cameraData){// depth priming requires an extra depth copy, disable it on platforms not supporting it (like GLES when MSAA is on)if (!CanCopyDepth(ref cameraData))return false;// Depth Priming causes rendering errors with WebGL on Apple Arm64 GPUs.bool isNotWebGL = !IsWebGL();bool depthPrimingRequested = (m_DepthPrimingRecommended && m_DepthPrimingMode == DepthPrimingMode.Auto) || m_DepthPrimingMode == DepthPrimingMode.Forced;bool isForwardRenderingMode = m_RenderingMode == RenderingMode.Forward || m_RenderingMode == RenderingMode.ForwardPlus;bool isFirstCameraToWriteDepth = cameraData.renderType == CameraRenderType.Base || cameraData.clearDepth;// Enabled Depth priming when baking Reflection Probes causes artefacts (UUM-12397)bool isNotReflectionCamera = cameraData.cameraType != CameraType.Reflection;// Depth is not rendered in a depth-only camera setup with depth priming (UUM-38158)bool isNotOffscreenDepthTexture = !IsOffscreenDepthTexture(cameraData);return depthPrimingRequested && isForwardRenderingMode && isFirstCameraToWriteDepth && isNotReflectionCamera && isNotOffscreenDepthTexture 
                && isNotWebGL;
        }

        bool IsWebGL()
        {
#if PLATFORM_WEBGL
            return IsGLESDevice();
#else
            return false;
#endif}bool IsGLESDevice(){return SystemInfo.graphicsDeviceType == GraphicsDeviceType.OpenGLES2 || SystemInfo.graphicsDeviceType == GraphicsDeviceType.OpenGLES3;}bool IsGLDevice(){return IsGLESDevice() || SystemInfo.graphicsDeviceType == GraphicsDeviceType.OpenGLCore;}/// <inheritdoc />public override void Setup(ScriptableRenderContext context, ref RenderingData renderingData){m_ForwardLights.PreSetup(ref renderingData);ref CameraData cameraData = ref renderingData.cameraData;Camera camera = cameraData.camera;RenderTextureDescriptor cameraTargetDescriptor = cameraData.cameraTargetDescriptor;var cmd = renderingData.commandBuffer;if (DebugHandler != null){DebugHandler.Setup(context, ref renderingData);if (DebugHandler.IsActiveForCamera(ref cameraData)){if (DebugHandler.WriteToDebugScreenTexture(ref cameraData)){RenderTextureDescriptor colorDesc = cameraData.cameraTargetDescriptor;DebugHandler.ConfigureColorDescriptorForDebugScreen(ref colorDesc, cameraData.pixelWidth, cameraData.pixelHeight);RenderingUtils.ReAllocateIfNeeded(ref DebugHandler.DebugScreenColorHandle, colorDesc, name: "_DebugScreenColor");RenderTextureDescriptor depthDesc = cameraData.cameraTargetDescriptor;DebugHandler.ConfigureDepthDescriptorForDebugScreen(ref depthDesc, k_DepthBufferBits, cameraData.pixelWidth, cameraData.pixelHeight);RenderingUtils.ReAllocateIfNeeded(ref DebugHandler.DebugScreenDepthHandle, depthDesc, name: "_DebugScreenDepth");}if (DebugHandler.HDRDebugViewIsActive(ref cameraData)){DebugHandler.hdrDebugViewPass.Setup(ref cameraData, DebugHandler.DebugDisplaySettings.lightingSettings.hdrDebugMode);EnqueuePass(DebugHandler.hdrDebugViewPass);}}}if (cameraData.cameraType != CameraType.Game)useRenderPassEnabled = false;// Because of the shortcutting done by depth only offscreen cameras, useDepthPriming must be computed earlyuseDepthPriming = IsDepthPrimingEnabled(ref cameraData);// Special path for depth only offscreen cameras. 
Only write opaques + transparents.if (IsOffscreenDepthTexture(in cameraData)){ConfigureCameraTarget(k_CameraTarget, k_CameraTarget);SetupRenderPasses(in renderingData);EnqueuePass(m_RenderOpaqueForwardPass);#if ADAPTIVE_PERFORMANCE_2_1_0_OR_NEWERif (!needTransparencyPass)return;
#endifEnqueuePass(m_RenderTransparentForwardPass);return;}// Assign the camera color target early in case it is needed during AddRenderPasses.bool isPreviewCamera = cameraData.isPreviewCamera;var createColorTexture = ((rendererFeatures.Count != 0 && m_IntermediateTextureMode == IntermediateTextureMode.Always) && !isPreviewCamera) ||(Application.isEditor && m_Clustering);// Gather render passe input requirementsRenderPassInputSummary renderPassInputs = GetRenderPassInputs(ref renderingData);// Gather render pass require rendering layers event and mask sizebool requiresRenderingLayer = RenderingLayerUtils.RequireRenderingLayers(this, rendererFeatures,cameraTargetDescriptor.msaaSamples,out var renderingLayersEvent, out var renderingLayerMaskSize);// All passes that use write to rendering layers are excluded from gl// So we disable it to avoid setting multiple render targetsif (IsGLDevice())requiresRenderingLayer = false;bool renderingLayerProvidesByDepthNormalPass = false;bool renderingLayerProvidesRenderObjectPass = false;if (requiresRenderingLayer && renderingModeActual != RenderingMode.Deferred){switch (renderingLayersEvent){case RenderingLayerUtils.Event.DepthNormalPrePass:renderingLayerProvidesByDepthNormalPass = true;break;case RenderingLayerUtils.Event.Opaque:renderingLayerProvidesRenderObjectPass = true;break;default:throw new ArgumentOutOfRangeException();}}// Enable depth normal prepassif (renderingLayerProvidesByDepthNormalPass)renderPassInputs.requiresNormalsTexture = true;// TODO: investigate the order of call, had to change because of requiresRenderingLayerif (m_DeferredLights != null){m_DeferredLights.RenderingLayerMaskSize = renderingLayerMaskSize;m_DeferredLights.UseDecalLayers = requiresRenderingLayer;// TODO: This needs to be setup early, otherwise gbuffer attachments will be allocated with wrong sizem_DeferredLights.HasNormalPrepass = renderPassInputs.requiresNormalsTexture;m_DeferredLights.ResolveMixedLightingMode(ref 
renderingData);m_DeferredLights.IsOverlay = cameraData.renderType == CameraRenderType.Overlay;if (m_DeferredLights.UseRenderPass){// At this point we only have injected renderer features in the queue and can do assumptions on whether we'll need Framebuffer Fetchforeach (var pass in activeRenderPassQueue){if (pass.renderPassEvent >= RenderPassEvent.AfterRenderingGbuffer &&pass.renderPassEvent <= RenderPassEvent.BeforeRenderingDeferredLights){m_DeferredLights.DisableFramebufferFetchInput();break;}}}}// Should apply post-processing after rendering this camera?bool applyPostProcessing = cameraData.postProcessEnabled && m_PostProcessPasses.isCreated;// There's at least a camera in the camera stack that applies post-processingbool anyPostProcessing = renderingData.postProcessingEnabled && m_PostProcessPasses.isCreated;// If Camera's PostProcessing is enabled and if there any enabled PostProcessing requires depth texture as shader read resource (Motion Blur/DoF)bool cameraHasPostProcessingWithDepth = applyPostProcessing && cameraData.postProcessingRequiresDepthTexture;// TODO: We could cache and generate the LUT before rendering the stackbool generateColorGradingLUT = cameraData.postProcessEnabled && m_PostProcessPasses.isCreated;bool isSceneViewOrPreviewCamera = cameraData.isSceneViewCamera || cameraData.isPreviewCamera;// This indicates whether the renderer will output a depth texture.bool requiresDepthTexture = cameraData.requiresDepthTexture || renderPassInputs.requiresDepthTexture || m_DepthPrimingMode == DepthPrimingMode.Forced;#if UNITY_EDITORbool isGizmosEnabled = UnityEditor.Handles.ShouldRenderGizmos();
#else
            bool isGizmosEnabled = false;
#endifbool mainLightShadows = m_MainLightShadowCasterPass.Setup(ref renderingData);bool additionalLightShadows = m_AdditionalLightsShadowCasterPass.Setup(ref renderingData);bool transparentsNeedSettingsPass = m_TransparentSettingsPass.Setup();bool forcePrepass = (m_CopyDepthMode == CopyDepthMode.ForcePrepass);// Depth prepass is generated in the following cases:// - If game or offscreen camera requires it we check if we can copy the depth from the rendering opaques pass and use that instead.// - Scene or preview cameras always require a depth texture. We do a depth pre-pass to simplify it and it shouldn't matter much for editor.// - Render passes require itbool requiresDepthPrepass = (requiresDepthTexture || cameraHasPostProcessingWithDepth) && (!CanCopyDepth(ref renderingData.cameraData) || forcePrepass);requiresDepthPrepass |= isSceneViewOrPreviewCamera;requiresDepthPrepass |= isGizmosEnabled;requiresDepthPrepass |= isPreviewCamera;requiresDepthPrepass |= renderPassInputs.requiresDepthPrepass;requiresDepthPrepass |= renderPassInputs.requiresNormalsTexture;// Current aim of depth prepass is to generate a copy of depth buffer, it is NOT to prime depth buffer and reduce overdraw on non-mobile platforms.// When deferred renderer is enabled, depth buffer is already accessible so depth prepass is not needed.// The only exception is for generating depth-normal textures: SSAO pass needs it and it must run before forward-only geometry.// DepthNormal prepass will render:// - forward-only geometry when deferred renderer is enabled// - all geometry when forward renderer is enabledif (requiresDepthPrepass && this.renderingModeActual == RenderingMode.Deferred && !renderPassInputs.requiresNormalsTexture)requiresDepthPrepass = false;requiresDepthPrepass |= useDepthPriming;// If possible try to merge the opaque and skybox passes instead of splitting them when "Depth Texture" is required.// The copying of depth should normally happen after rendering opaques.// But if we only 
require it for post processing or the scene camera then we do it after rendering transparent objects// Aim to have the most optimized render pass event for Depth Copy (The aim is to minimize the number of render passes)if (requiresDepthTexture){bool copyDepthAfterTransparents = m_CopyDepthMode == CopyDepthMode.AfterTransparents;RenderPassEvent copyDepthPassEvent = copyDepthAfterTransparents ? RenderPassEvent.AfterRenderingTransparents : RenderPassEvent.AfterRenderingOpaques;// RenderPassInputs's requiresDepthTexture is configured through ScriptableRenderPass's ConfigureInput functionif (renderPassInputs.requiresDepthTexture){// Do depth copy before the render pass that requires depth texture as shader read resourcecopyDepthPassEvent = (RenderPassEvent)Mathf.Min((int)RenderPassEvent.AfterRenderingTransparents, ((int)renderPassInputs.requiresDepthTextureEarliestEvent) - 1);}m_CopyDepthPass.renderPassEvent = copyDepthPassEvent;}else if (cameraHasPostProcessingWithDepth || isSceneViewOrPreviewCamera || isGizmosEnabled){// If only post process requires depth texture, we can re-use depth buffer from main geometry pass instead of enqueuing a depth copy pass, but no proper API to do that for now, so resort to depth copy pass for nowm_CopyDepthPass.renderPassEvent = RenderPassEvent.AfterRenderingTransparents;}createColorTexture |= RequiresIntermediateColorTexture(ref cameraData);createColorTexture |= renderPassInputs.requiresColorTexture;createColorTexture |= renderPassInputs.requiresColorTextureCreated;createColorTexture &= !isPreviewCamera;// If camera requires depth and there's no depth pre-pass we create a depth texture that can be read later by effect requiring it.// When deferred renderer is enabled, we must always create a depth texture and CANNOT use BuiltinRenderTextureType.CameraTarget. 
            // This is to get
            // around a bug where during gbuffer pass (MRT pass), the camera depth attachment is correctly bound, but during
            // deferred pass ("camera color" + "camera depth"), the implicit depth surface of "camera color" is used instead of "camera depth",
            // because BuiltinRenderTextureType.CameraTarget for depth means there is no explicit depth attachment...
            bool createDepthTexture = (requiresDepthTexture || cameraHasPostProcessingWithDepth) && !requiresDepthPrepass;
            createDepthTexture |= !cameraData.resolveFinalTarget;
            // Deferred renderer always need to access depth buffer.
            createDepthTexture |= (this.renderingModeActual == RenderingMode.Deferred && !useRenderPassEnabled);
            // Some render cases (e.g. Material previews) have shown we need to create a depth texture when we're forcing a prepass.
            createDepthTexture |= useDepthPriming;
            // Todo seems like with mrt depth is not taken from first target
            createDepthTexture |= (renderingLayerProvidesRenderObjectPass);

#if ENABLE_VR && ENABLE_XR_MODULE
            // URP can't handle msaa/size mismatch between depth RT and color RT (for now we create intermediate textures to ensure they match)
            if (cameraData.xr.enabled)
                createColorTexture |= createDepthTexture;
#endif
#if UNITY_ANDROID || UNITY_WEBGL
            // GLES can not use render texture's depth buffer with the color buffer of the backbuffer
            // in such case we create a color texture for it too.
            // If Vulkan PreTransform is enabled we can't mix backbuffer and intermediate render target due to screen orientation mismatch
            if (SystemInfo.graphicsDeviceType != GraphicsDeviceType.Vulkan || m_VulkanEnablePreTransform)
                createColorTexture |= createDepthTexture;
#endif

            // If there is any scaling, the color and depth need to be the same resolution and the target texture
            // will not be the proper size in this case. Same happens with GameView.
            // This introduces the final blit pass.
            if (RTHandles.rtHandleProperties.rtHandleScale.x != 1.0f || RTHandles.rtHandleProperties.rtHandleScale.y != 1.0f)
                createColorTexture |= createDepthTexture;

            if (useRenderPassEnabled || useDepthPriming)
                createColorTexture |= createDepthTexture;

            // Set rt descriptors so preview camera's have access should it be needed
            var colorDescriptor = cameraTargetDescriptor;
            colorDescriptor.useMipMap = false;
            colorDescriptor.autoGenerateMips = false;
            colorDescriptor.depthBufferBits = (int)DepthBits.None;
            m_ColorBufferSystem.SetCameraSettings(colorDescriptor, FilterMode.Bilinear);

            bool clearInPostProcess = false;

            // Configure all settings require to start a new camera stack (base camera only)
            if (cameraData.renderType == CameraRenderType.Base)
            {
                // Scene filtering redraws the objects on top of the resulting frame.
                // It has to draw directly to the sceneview buffer.
                bool sceneViewFilterEnabled = camera.sceneViewFilterMode == Camera.SceneViewFilterMode.ShowFiltered;
                bool intermediateRenderTexture = (createColorTexture || createDepthTexture) && !sceneViewFilterEnabled;

                // RTHandles do not support combining color and depth in the same texture so we create them separately
                // Should be independent from filtered scene view

                // Custom clear-switch logic: only keep the depth attachment when an active post-processing volume actually needs it
                if (cameraData.targetTexture != null)
                    createDepthTexture |= createColorTexture;
                else
                {
                    if (VolumeManager.instance.GetVolumes(renderingData.cameraData.volumeLayerMask).Length <= 0)
                        clearInPostProcess = true;
                    else
                    {
                        Volume currentVolume = VolumeManager.instance.GetVolumes(renderingData.cameraData.volumeLayerMask)[0];
                        if (currentVolume == null || !currentVolume.isActiveAndEnabled || !currentVolume.isGlobal || !currentVolume.gameObject.activeInHierarchy)
                            clearInPostProcess = true;
                        else
                        {
                            var stack = VolumeManager.instance.stack;
                            var components = VolumeManager.instance.baseComponentTypeArray
                                .Where(t => t.GetInterface(nameof(IPostProcessComponent)) != null && stack.GetComponent(t) != null)
                                .Select(t => stack.GetComponent(t) as IPostProcessComponent)
                                .ToList();
                            if (components == null || components.Count == 0)
                                clearInPostProcess = true;
                            else
                            {
                                var active_components = new List<IPostProcessComponent>();
                                foreach (var item in components)
                                {
                                    if (item.IsActive())
                                        active_components.Add(item);
                                }
                                if (active_components.Count == 0)
                                    clearInPostProcess = true;
                                else
                                {
                                    if (renderingData.cameraData.requiresDepthTexture || renderingData.cameraData.postProcessingRequiresDepthTexture)
                                        createDepthTexture |= createColorTexture;
                                }
                            }
                        }
                    }
                }

                RenderTargetIdentifier targetId = BuiltinRenderTextureType.CameraTarget;
#if ENABLE_VR && ENABLE_XR_MODULE
                if (cameraData.xr.enabled)
                    targetId = cameraData.xr.renderTarget;
#endif
                if (m_XRTargetHandleAlias == null)
                {
                    m_XRTargetHandleAlias = RTHandles.Alloc(targetId);
                }
                else if (m_XRTargetHandleAlias.nameID != targetId)
                {
                    RTHandleStaticHelpers.SetRTHandleUserManagedWrapper(ref m_XRTargetHandleAlias, targetId);
                }

                // Doesn't create texture for Overlay cameras as they are already overlaying on top of created textures.
                if (intermediateRenderTexture)
                    CreateCameraRenderTarget(context, ref cameraTargetDescriptor, useDepthPriming, cmd, ref cameraData);

                m_RenderOpaqueForwardPass.m_IsActiveTargetBackBuffer = !intermediateRenderTexture;
                m_RenderTransparentForwardPass.m_IsActiveTargetBackBuffer = !intermediateRenderTexture;
                m_DrawSkyboxPass.m_IsActiveTargetBackBuffer = !intermediateRenderTexture;
#if ENABLE_VR && ENABLE_XR_MODULE
                m_XROcclusionMeshPass.m_IsActiveTargetBackBuffer = !intermediateRenderTexture;
#endif

                m_ActiveCameraColorAttachment = createColorTexture ? m_ColorBufferSystem.PeekBackBuffer() : m_XRTargetHandleAlias;
                m_ActiveCameraDepthAttachment = createDepthTexture ? m_CameraDepthAttachment : m_XRTargetHandleAlias;
            }
            else
            {
                cameraData.baseCamera.TryGetComponent<UniversalAdditionalCameraData>(out var baseCameraData);
                var baseRenderer = (UniversalRenderer)baseCameraData.scriptableRenderer;
                if (m_ColorBufferSystem != baseRenderer.m_ColorBufferSystem)
                {
                    m_ColorBufferSystem.Dispose();
                    m_ColorBufferSystem = baseRenderer.m_ColorBufferSystem;
                }
                m_ActiveCameraColorAttachment = m_ColorBufferSystem.PeekBackBuffer();
                m_ActiveCameraDepthAttachment = baseRenderer.m_ActiveCameraDepthAttachment;
                m_XRTargetHandleAlias = baseRenderer.m_XRTargetHandleAlias;
            }

            if (rendererFeatures.Count != 0 && !isPreviewCamera)
                ConfigureCameraColorTarget(m_ColorBufferSystem.PeekBackBuffer());

            bool copyColorPass = renderingData.cameraData.requiresOpaqueTexture || renderPassInputs.requiresColorTexture;
            // Check the createColorTexture logic above: intermediate color texture is not available for preview cameras.
            // Because intermediate color is not available and copyColor pass requires it, we disable CopyColor pass here.
            copyColorPass &= !isPreviewCamera;

            // Assign camera targets (color and depth)
            ConfigureCameraTarget(m_ActiveCameraColorAttachment, m_ActiveCameraDepthAttachment);

            bool hasPassesAfterPostProcessing = activeRenderPassQueue.Find(x => x.renderPassEvent == RenderPassEvent.AfterRenderingPostProcessing) != null;

            if (mainLightShadows)
                EnqueuePass(m_MainLightShadowCasterPass);

            if (additionalLightShadows)
                EnqueuePass(m_AdditionalLightsShadowCasterPass);

            bool requiresDepthCopyPass = !requiresDepthPrepass
                && (renderingData.cameraData.requiresDepthTexture || cameraHasPostProcessingWithDepth || renderPassInputs.requiresDepthTexture)
                && createDepthTexture;

            // Custom: release the temporary depth RTs when nothing needs depth this frame
            if (!requiresDepthTexture && !createDepthTexture && !requiresDepthCopyPass &&
                !requiresDepthPrepass)
            {
                m_CameraDepthAttachment?.Release();
                m_ActiveCameraDepthAttachment?.Release();
                m_DepthTexture?.Release();
            }
            if (!createColorTexture)
            {
                m_ActiveCameraColorAttachment?.Release();
                m_ColorBufferSystem.Dispose();
            }
            if (!copyColorPass)
            {
                m_OpaqueColor?.Release();
            }

            if ((DebugHandler != null) && DebugHandler.IsActiveForCamera(ref cameraData))
            {
                DebugHandler.TryGetFullscreenDebugMode(out var fullScreenMode);
                if (fullScreenMode == DebugFullScreenMode.Depth)
                {
                    requiresDepthPrepass = true;
                }

                if (!DebugHandler.IsLightingActive)
                {
                    mainLightShadows = false;
                    additionalLightShadows = false;

                    if (!isSceneViewOrPreviewCamera)
                    {
                        requiresDepthPrepass = false;
                        useDepthPriming = false;
                        generateColorGradingLUT = false;
                        copyColorPass = false;
                        requiresDepthCopyPass = false;
                    }
                }

                if (useRenderPassEnabled)
                    useRenderPassEnabled = DebugHandler.IsRenderPassSupported;
            }

            cameraData.renderer.useDepthPriming = useDepthPriming;

            if (this.renderingModeActual == RenderingMode.Deferred)
            {
                if (m_DeferredLights.UseRenderPass && (RenderPassEvent.AfterRenderingGbuffer == renderPassInputs.requiresDepthNormalAtEvent || !useRenderPassEnabled))
                    m_DeferredLights.DisableFramebufferFetchInput();
            }

            // Allocate m_DepthTexture if used
            if ((this.renderingModeActual == RenderingMode.Deferred && !this.useRenderPassEnabled) || requiresDepthPrepass || requiresDepthCopyPass)
            {
                var depthDescriptor = cameraTargetDescriptor;
                if ((requiresDepthPrepass && this.renderingModeActual != RenderingMode.Deferred) || !RenderingUtils.SupportsGraphicsFormat(GraphicsFormat.R32_SFloat, FormatUsage.Render))
                {
                    // Debug.Log("Pass 1");
                    depthDescriptor.graphicsFormat = GraphicsFormat.None;
                    depthDescriptor.depthStencilFormat = k_DepthStencilFormat;
                    depthDescriptor.depthBufferBits = k_DepthBufferBits;
#if UNITY_IOS || UNITY_ANDROID || UNITY_EDITOR
                    depthDescriptor.width = (int)(depthDescriptor.width * 0.5);
                    depthDescriptor.height = (int)(depthDescriptor.height * 0.5);
#endif
                }
                else
                {
                    // Custom: reduce _CameraDepthTexture precision and halve its size
#if UNITY_IOS || UNITY_ANDROID || UNITY_EDITOR
                    if (RenderingUtils.SupportsGraphicsFormat(GraphicsFormat.R16_SNorm, FormatUsage.Render))
                        depthDescriptor.graphicsFormat = GraphicsFormat.R16_SNorm;
                    else if (RenderingUtils.SupportsGraphicsFormat(GraphicsFormat.R16_SFloat, FormatUsage.Render))
                        depthDescriptor.graphicsFormat = GraphicsFormat.R16_SFloat;
                    else if (RenderingUtils.SupportsGraphicsFormat(GraphicsFormat.R16_UNorm, FormatUsage.Render))
                        depthDescriptor.graphicsFormat = GraphicsFormat.R16_UNorm;
                    else
                        depthDescriptor.graphicsFormat = GraphicsFormat.R32_SFloat;
                    depthDescriptor.width = (int)(depthDescriptor.width * 0.5);
                    depthDescriptor.height = (int)(depthDescriptor.height * 0.5);
#else
                    depthDescriptor.graphicsFormat = GraphicsFormat.R32_SFloat;
#endif
                    depthDescriptor.depthStencilFormat = GraphicsFormat.None;
                    depthDescriptor.depthBufferBits = 0;
                }

                // Debug.Log("Pass 2" + depthDescriptor.graphicsFormat + " " + depthDescriptor.width + "  " + depthDescriptor.height);
                depthDescriptor.msaaSamples = 1; // Depth-Only pass don't use MSAA
                RenderingUtils.ReAllocateIfNeeded(ref m_DepthTexture, depthDescriptor, FilterMode.Point, wrapMode: TextureWrapMode.Clamp, name: "_CameraDepthTexture");

                cmd.SetGlobalTexture(m_DepthTexture.name, m_DepthTexture.nameID);
                context.ExecuteCommandBuffer(cmd);
                cmd.Clear();
            }

            if (requiresRenderingLayer || (renderingModeActual == RenderingMode.Deferred && m_DeferredLights.UseRenderingLayers))
            {
                ref var renderingLayersTexture = ref m_DecalLayersTexture;
                string renderingLayersTextureName = "_CameraRenderingLayersTexture";

                if (this.renderingModeActual == RenderingMode.Deferred && m_DeferredLights.UseRenderingLayers)
                {
                    renderingLayersTexture = ref m_DeferredLights.GbufferAttachments[(int)m_DeferredLights.GBufferRenderingLayers];
                    renderingLayersTextureName = renderingLayersTexture.name;
                }

                var renderingLayersDescriptor = cameraTargetDescriptor;
                renderingLayersDescriptor.depthBufferBits = 0;
                // Never have MSAA on this depth texture.
                // When doing MSAA depth priming this is the texture that is resolved to and used for post-processing.
                if (!renderingLayerProvidesRenderObjectPass)
                    renderingLayersDescriptor.msaaSamples = 1; // Depth-Only pass don't use MSAA

                // Find compatible render-target format for storing normals.
                // Shader code outputs normals in signed format to be compatible with deferred gbuffer layout.
                // Deferred gbuffer format is signed so that normals can be blended for terrain geometry.
                if (this.renderingModeActual == RenderingMode.Deferred && m_DeferredLights.UseRenderingLayers)
                    renderingLayersDescriptor.graphicsFormat = m_DeferredLights.GetGBufferFormat(m_DeferredLights.GBufferRenderingLayers); // the one used by the gbuffer.
                else
                    renderingLayersDescriptor.graphicsFormat = RenderingLayerUtils.GetFormat(renderingLayerMaskSize);

                if (renderingModeActual == RenderingMode.Deferred && m_DeferredLights.UseRenderingLayers)
                {
                    m_DeferredLights.ReAllocateGBufferIfNeeded(renderingLayersDescriptor, (int)m_DeferredLights.GBufferRenderingLayers);
                }
                else
                {
                    RenderingUtils.ReAllocateIfNeeded(ref renderingLayersTexture, renderingLayersDescriptor, FilterMode.Point, TextureWrapMode.Clamp, name: renderingLayersTextureName);
                }

                cmd.SetGlobalTexture(renderingLayersTexture.name, renderingLayersTexture.nameID);
                RenderingLayerUtils.SetupProperties(cmd, renderingLayerMaskSize);
                if (this.renderingModeActual == RenderingMode.Deferred) // As this is requested by render pass we still want to set it
                    cmd.SetGlobalTexture("_CameraRenderingLayersTexture", renderingLayersTexture.nameID);
                context.ExecuteCommandBuffer(cmd);
                cmd.Clear();
            }

            // Allocate normal texture if used
            if (requiresDepthPrepass && renderPassInputs.requiresNormalsTexture)
            {
                ref var normalsTexture = ref m_NormalsTexture;
                string normalsTextureName = "_CameraNormalsTexture";

                if (this.renderingModeActual == RenderingMode.Deferred)
                {
                    normalsTexture = ref m_DeferredLights.GbufferAttachments[(int)m_DeferredLights.GBufferNormalSmoothnessIndex];
                    normalsTextureName = normalsTexture.name;
                }

                var
                normalDescriptor = cameraTargetDescriptor;
                normalDescriptor.depthBufferBits = 0;
                // Never have MSAA on this depth texture. When doing MSAA depth priming this is the texture that is resolved to and used for post-processing.
                normalDescriptor.msaaSamples = useDepthPriming ? cameraTargetDescriptor.msaaSamples : 1; // Depth-Only passes don't use MSAA, unless depth priming is enabled

                // Find compatible render-target format for storing normals.
                // Shader code outputs normals in signed format to be compatible with deferred gbuffer layout.
                // Deferred gbuffer format is signed so that normals can be blended for terrain geometry.
                if (this.renderingModeActual == RenderingMode.Deferred)
                    normalDescriptor.graphicsFormat = m_DeferredLights.GetGBufferFormat(m_DeferredLights.GBufferNormalSmoothnessIndex); // the one used by the gbuffer.
                else
                    normalDescriptor.graphicsFormat = DepthNormalOnlyPass.GetGraphicsFormat();

                if (this.renderingModeActual == RenderingMode.Deferred)
                {
                    m_DeferredLights.ReAllocateGBufferIfNeeded(normalDescriptor, (int)m_DeferredLights.GBufferNormalSmoothnessIndex);
                }
                else
                {
                    RenderingUtils.ReAllocateIfNeeded(ref normalsTexture, normalDescriptor, FilterMode.Point, TextureWrapMode.Clamp, name: normalsTextureName);
                }

                cmd.SetGlobalTexture(normalsTexture.name, normalsTexture.nameID);
                if (this.renderingModeActual == RenderingMode.Deferred) // As this is requested by render pass we still want to set it
                    cmd.SetGlobalTexture("_CameraNormalsTexture", normalsTexture.nameID);
                context.ExecuteCommandBuffer(cmd);
                cmd.Clear();
            }

            if (requiresDepthPrepass)
            {
                if (renderPassInputs.requiresNormalsTexture)
                {
                    if (this.renderingModeActual == RenderingMode.Deferred)
                    {
                        // In deferred mode, depth-normal prepass does really primes the depth and normal buffers, instead of creating a copy.
                        // It is necessary because we need to render depth&normal for forward-only geometry and it is the only way
                        // to get them before the SSAO pass.

                        int gbufferNormalIndex = m_DeferredLights.GBufferNormalSmoothnessIndex;
                        if
                        (m_DeferredLights.UseRenderingLayers)
                            m_DepthNormalPrepass.Setup(m_ActiveCameraDepthAttachment, m_DeferredLights.GbufferAttachments[gbufferNormalIndex], m_DeferredLights.GbufferAttachments[m_DeferredLights.GBufferRenderingLayers]);
                        else if (renderingLayerProvidesByDepthNormalPass)
                            m_DepthNormalPrepass.Setup(m_ActiveCameraDepthAttachment, m_DeferredLights.GbufferAttachments[gbufferNormalIndex], m_DecalLayersTexture);
                        else
                            m_DepthNormalPrepass.Setup(m_ActiveCameraDepthAttachment, m_DeferredLights.GbufferAttachments[gbufferNormalIndex]);

                        // Only render forward-only geometry, as standard geometry will be rendered as normal into the gbuffer.
                        if (RenderPassEvent.AfterRenderingGbuffer <= renderPassInputs.requiresDepthNormalAtEvent &&
                            renderPassInputs.requiresDepthNormalAtEvent <= RenderPassEvent.BeforeRenderingOpaques)
                            m_DepthNormalPrepass.shaderTagIds = k_DepthNormalsOnly;
                    }
                    else
                    {
                        if (renderingLayerProvidesByDepthNormalPass)
                            m_DepthNormalPrepass.Setup(m_DepthTexture, m_NormalsTexture, m_DecalLayersTexture);
                        else
                            m_DepthNormalPrepass.Setup(m_DepthTexture, m_NormalsTexture);
                    }

                    EnqueuePass(m_DepthNormalPrepass);
                }
                else
                {
                    // Deferred renderer does not require a depth-prepass to generate samplable depth texture.
                    if (this.renderingModeActual != RenderingMode.Deferred)
                    {
                        m_DepthPrepass.Setup(cameraTargetDescriptor, m_DepthTexture);
                        EnqueuePass(m_DepthPrepass);
                    }
                }
            }

            // depth priming still needs to copy depth because the prepass doesn't target anymore CameraDepthTexture
            // TODO: this is unoptimal, investigate optimizations
            if (useDepthPriming)
            {
                m_PrimedDepthCopyPass.Setup(m_ActiveCameraDepthAttachment, m_DepthTexture);
                EnqueuePass(m_PrimedDepthCopyPass);
            }

            if (generateColorGradingLUT)
            {
                colorGradingLutPass.ConfigureDescriptor(in renderingData.postProcessingData, out var desc, out var filterMode);
                RenderingUtils.ReAllocateIfNeeded(ref m_PostProcessPasses.m_ColorGradingLut, desc, filterMode, TextureWrapMode.Clamp, anisoLevel: 0, name:
                    "_InternalGradingLut");
                colorGradingLutPass.Setup(colorGradingLut);
                EnqueuePass(colorGradingLutPass);
            }

#if ENABLE_VR && ENABLE_XR_MODULE
            if (cameraData.xr.hasValidOcclusionMesh)
                EnqueuePass(m_XROcclusionMeshPass);
#endif

            bool lastCameraInTheStack = cameraData.resolveFinalTarget;

            if (this.renderingModeActual == RenderingMode.Deferred)
            {
                if (m_DeferredLights.UseRenderPass && (RenderPassEvent.AfterRenderingGbuffer == renderPassInputs.requiresDepthNormalAtEvent || !useRenderPassEnabled))
                    m_DeferredLights.DisableFramebufferFetchInput();

                EnqueueDeferred(ref renderingData, requiresDepthPrepass, renderPassInputs.requiresNormalsTexture, renderingLayerProvidesByDepthNormalPass, mainLightShadows, additionalLightShadows);
            }
            else
            {
                // Optimized store actions are very important on tile based GPUs and have a great impact on performance.
                // if MSAA is enabled and any of the following passes need a copy of the color or depth target, make sure the MSAA'd surface is stored
                // if following passes won't use it then just resolve (the Resolve action will still store the resolved surface, but discard the MSAA'd surface, which is very expensive to store).
                RenderBufferStoreAction opaquePassColorStoreAction = RenderBufferStoreAction.Store;
                if (cameraTargetDescriptor.msaaSamples > 1)
                    opaquePassColorStoreAction = copyColorPass ? RenderBufferStoreAction.StoreAndResolve : RenderBufferStoreAction.Store;

                // make sure we store the depth only if following passes need it.
                RenderBufferStoreAction opaquePassDepthStoreAction = (copyColorPass || requiresDepthCopyPass || !lastCameraInTheStack) ? RenderBufferStoreAction.Store : RenderBufferStoreAction.DontCare;
#if ENABLE_VR && ENABLE_XR_MODULE
                if (cameraData.xr.enabled && cameraData.xr.copyDepth)
                {
                    opaquePassDepthStoreAction = RenderBufferStoreAction.Store;
                }
#endif

                // handle multisample depth resolve by setting the appropriate store actions if supported
                if (requiresDepthCopyPass && cameraTargetDescriptor.msaaSamples > 1 && RenderingUtils.MultisampleDepthResolveSupported())
                {
                    bool isCopyDepthAfterTransparent = m_CopyDepthPass.renderPassEvent == RenderPassEvent.AfterRenderingTransparents;

                    // we could StoreAndResolve when the depth copy is after opaque, but performance wise doing StoreAndResolve of depth targets is more expensive than a simple Store + following depth copy pass on Apple GPUs,
                    // because of the extra resolve step. So, unless we are copying the depth after the transparent pass, just Store the depth target.
                    if (isCopyDepthAfterTransparent && !copyColorPass)
                    {
                        if (opaquePassDepthStoreAction == RenderBufferStoreAction.Store)
                            opaquePassDepthStoreAction = RenderBufferStoreAction.StoreAndResolve;
                        else if (opaquePassDepthStoreAction == RenderBufferStoreAction.DontCare)
                            opaquePassDepthStoreAction = RenderBufferStoreAction.Resolve;
                    }
                }

                DrawObjectsPass renderOpaqueForwardPass = null;
                if (renderingLayerProvidesRenderObjectPass)
                {
                    renderOpaqueForwardPass = m_RenderOpaqueForwardWithRenderingLayersPass;
                    m_RenderOpaqueForwardWithRenderingLayersPass.Setup(m_ActiveCameraColorAttachment, m_DecalLayersTexture, m_ActiveCameraDepthAttachment);
                }
                else
                    renderOpaqueForwardPass = m_RenderOpaqueForwardPass;

                renderOpaqueForwardPass.ConfigureColorStoreAction(opaquePassColorStoreAction);
                renderOpaqueForwardPass.ConfigureDepthStoreAction(opaquePassDepthStoreAction);

                // If there is any custom render pass renders to opaque pass' target before opaque pass,
                // we can't clear color as it contains the valid rendering output.
                bool hasPassesBeforeOpaque = activeRenderPassQueue.Find(x => (x.renderPassEvent <= RenderPassEvent.BeforeRenderingOpaques) && !x.overrideCameraTarget) != null;
                ClearFlag opaqueForwardPassClearFlag = (hasPassesBeforeOpaque || cameraData.renderType != CameraRenderType.Base)
                    ? ClearFlag.None
                    : ClearFlag.Color;
#if ENABLE_VR && ENABLE_XR_MODULE
                // workaround for DX11 and DX12 XR test failures.
                // XRTODO: investigate DX XR clear issues.
                if (SystemInfo.usesLoadStoreActions)
#endif
                    renderOpaqueForwardPass.ConfigureClear(opaqueForwardPassClearFlag, Color.black);

                EnqueuePass(renderOpaqueForwardPass);
            }

            if (camera.clearFlags == CameraClearFlags.Skybox && cameraData.renderType != CameraRenderType.Overlay)
            {
                if (RenderSettings.skybox != null || (camera.TryGetComponent(out Skybox cameraSkybox) && cameraSkybox.material != null))
                    EnqueuePass(m_DrawSkyboxPass);
            }

            // If a depth texture was created we necessarily need to copy it, otherwise we could have render it to a renderbuffer.
            // Also skip if Deferred+RenderPass as CameraDepthTexture is used and filled by the GBufferPass
            // however we might need the depth texture with Forward-only pass rendered to it, so enable the copy depth in that case
            if (requiresDepthCopyPass && !(this.renderingModeActual == RenderingMode.Deferred && useRenderPassEnabled && !renderPassInputs.requiresDepthTexture))
            {
                m_CopyDepthPass.Setup(m_ActiveCameraDepthAttachment, m_DepthTexture);
                EnqueuePass(m_CopyDepthPass);
            }

            // Set the depth texture to the far Z if we do not have a depth prepass or copy depth
            // Don't do this for Overlay cameras to not lose depth data in between cameras (as Base is guaranteed to be first)
            if (cameraData.renderType == CameraRenderType.Base && !requiresDepthPrepass && !requiresDepthCopyPass)
                Shader.SetGlobalTexture("_CameraDepthTexture", SystemInfo.usesReversedZBuffer ? Texture2D.blackTexture : Texture2D.whiteTexture);

            if (copyColorPass)
            {
                // TODO: Downsampling method should be stored in the renderer instead of in the asset.
                // We need to migrate this data to renderer.
                // For now, we query the method in the active asset.
                Downsampling downsamplingMethod = UniversalRenderPipeline.asset.opaqueDownsampling;
                var descriptor = cameraTargetDescriptor;
                CopyColorPass.ConfigureDescriptor(downsamplingMethod, ref descriptor, out var filterMode);

                RenderingUtils.ReAllocateIfNeeded(ref m_OpaqueColor, descriptor, filterMode, TextureWrapMode.Clamp, name: "_CameraOpaqueTexture");
                m_CopyColorPass.Setup(m_ActiveCameraColorAttachment, m_OpaqueColor, downsamplingMethod);
                EnqueuePass(m_CopyColorPass);
            }

            // Motion vectors
            if (renderPassInputs.requiresMotionVectors)
            {
                var colorDesc = cameraTargetDescriptor;
                colorDesc.graphicsFormat = MotionVectorRenderPass.k_TargetFormat;
                colorDesc.depthBufferBits = (int)DepthBits.None;
                colorDesc.msaaSamples = 1; // Disable MSAA, consider a pixel resolve for half left velocity and half right velocity --> no velocity, which is untrue.
                RenderingUtils.ReAllocateIfNeeded(ref m_MotionVectorColor, colorDesc, FilterMode.Point, TextureWrapMode.Clamp, name: "_MotionVectorTexture");

                var depthDescriptor = cameraTargetDescriptor;
                depthDescriptor.graphicsFormat = GraphicsFormat.None;
                depthDescriptor.msaaSamples = 1;
                RenderingUtils.ReAllocateIfNeeded(ref m_MotionVectorDepth, depthDescriptor, FilterMode.Point, TextureWrapMode.Clamp, name: "_MotionVectorDepthTexture");

                m_MotionVectorPass.Setup(m_MotionVectorColor, m_MotionVectorDepth);
                EnqueuePass(m_MotionVectorPass);
            }

#if ADAPTIVE_PERFORMANCE_2_1_0_OR_NEWER
            if (needTransparencyPass)
#endif
            {
                if (transparentsNeedSettingsPass)
                {
                    EnqueuePass(m_TransparentSettingsPass);
                }

                // if this is not lastCameraInTheStack we still need to Store, since the MSAA buffer might be needed by the Overlay cameras
                RenderBufferStoreAction transparentPassColorStoreAction = cameraTargetDescriptor.msaaSamples > 1 && lastCameraInTheStack ? RenderBufferStoreAction.Resolve : RenderBufferStoreAction.Store;
                RenderBufferStoreAction transparentPassDepthStoreAction = lastCameraInTheStack ? RenderBufferStoreAction.DontCare : RenderBufferStoreAction.Store;

                // If CopyDepthPass pass event is scheduled on or after AfterRenderingTransparent, we will need to store the depth buffer or resolve (store for now until latest trunk has depth resolve support) it for MSAA case
                if (requiresDepthCopyPass && m_CopyDepthPass.renderPassEvent >= RenderPassEvent.AfterRenderingTransparents)
                {
                    transparentPassDepthStoreAction = RenderBufferStoreAction.Store;

                    // handle depth resolve on platforms supporting it
                    if (cameraTargetDescriptor.msaaSamples > 1 && RenderingUtils.MultisampleDepthResolveSupported())
                        transparentPassDepthStoreAction = RenderBufferStoreAction.Resolve;
                }

                m_RenderTransparentForwardPass.ConfigureColorStoreAction(transparentPassColorStoreAction);
                m_RenderTransparentForwardPass.ConfigureDepthStoreAction(transparentPassDepthStoreAction);
                EnqueuePass(m_RenderTransparentForwardPass);
            }
            EnqueuePass(m_OnRenderObjectCallbackPass);

            bool shouldRenderUI = cameraData.rendersOverlayUI;
            bool outputToHDR = cameraData.isHDROutputActive;
            if (shouldRenderUI && outputToHDR)
            {
                m_DrawOffscreenUIPass.Setup(ref cameraData, k_DepthBufferBits);
                EnqueuePass(m_DrawOffscreenUIPass);
            }

            bool hasCaptureActions = renderingData.cameraData.captureActions != null && lastCameraInTheStack;

            // When FXAA or scaling is active, we must perform an additional pass at the end of the frame for the following reasons:
            // 1. FXAA expects to be the last shader running on the image before it's presented to the screen.
            //    Since users are allowed
            //    to add additional render passes after post processing occurs, we can't run FXAA until all of those passes complete as well.
            //    The FinalPost pass is guaranteed to execute after user authored passes so FXAA is always run inside of it.
            // 2. UberPost can only handle upscaling with linear filtering. All other filtering methods require the FinalPost pass.
            // 3. TAA sharpening using standalone RCAS pass is required. (When upscaling is not enabled).
            bool applyFinalPostProcessing = anyPostProcessing && lastCameraInTheStack &&
                ((renderingData.cameraData.antialiasing == AntialiasingMode.FastApproximateAntialiasing) ||
                 ((renderingData.cameraData.imageScalingMode == ImageScalingMode.Upscaling) && (renderingData.cameraData.upscalingFilter != ImageUpscalingFilter.Linear)) ||
                 (renderingData.cameraData.IsTemporalAAEnabled() && renderingData.cameraData.taaSettings.contrastAdaptiveSharpening > 0.0f));

            // When post-processing is enabled we can use the stack to resolve rendering to camera target (screen or RT).
            // However when there are render passes executing after post we avoid resolving to screen so rendering continues (before sRGBConversion etc)
            bool resolvePostProcessingToCameraTarget = !hasCaptureActions && !hasPassesAfterPostProcessing && !applyFinalPostProcessing;
            bool needsColorEncoding = DebugHandler == null || !DebugHandler.HDRDebugViewIsActive(ref cameraData);

            // Custom: release the post-process temporary RT when no post-process component is active
            if (applyPostProcessing)
            {
                if (clearInPostProcess)
                {
                    m_PostProcessPasses.m_AfterPostProcessColor?.Release();
                    postProcessPass?.Dispose();
                }
                else
                {
                    var desc = PostProcessPass.GetCompatibleDescriptor(cameraTargetDescriptor, cameraTargetDescriptor.width, cameraTargetDescriptor.height, cameraTargetDescriptor.graphicsFormat, DepthBits.None);
                    RenderingUtils.ReAllocateIfNeeded(ref m_PostProcessPasses.m_AfterPostProcessColor, desc, FilterMode.Point, TextureWrapMode.Clamp, name: "_AfterPostProcessTexture");
                }
            }
            else
            {
                m_PostProcessPasses.m_AfterPostProcessColor?.Release();
                postProcessPass?.Dispose();
            }

            if
            (lastCameraInTheStack)
            {
                SetupFinalPassDebug(ref cameraData);

                // Post-processing will resolve to final target. No need for final blit pass.
                if (applyPostProcessing)
                {
                    // if resolving to screen we need to be able to perform sRGBConversion in post-processing if necessary
                    bool doSRGBEncoding = resolvePostProcessingToCameraTarget && needsColorEncoding;
                    postProcessPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment, resolvePostProcessingToCameraTarget, m_ActiveCameraDepthAttachment, colorGradingLut, m_MotionVectorColor, applyFinalPostProcessing, doSRGBEncoding);
                    EnqueuePass(postProcessPass);
                }

                var sourceForFinalPass = m_ActiveCameraColorAttachment;

                // Do FXAA or any other final post-processing effect that might need to run after AA.
                if (applyFinalPostProcessing)
                {
                    finalPostProcessPass.SetupFinalPass(sourceForFinalPass, true, needsColorEncoding);
                    EnqueuePass(finalPostProcessPass);
                }

                if (renderingData.cameraData.captureActions != null)
                {
                    EnqueuePass(m_CapturePass);
                }

                // if post-processing then we already resolved to camera target while doing post.
                // Also only do final blit if camera is not rendering to RT.
                bool cameraTargetResolved =
                    // final PP always blit to camera target
                    applyFinalPostProcessing ||
                    // no final PP but we have PP stack.
                    // In that case it blit unless there are render pass after PP
                    (applyPostProcessing && !hasPassesAfterPostProcessing && !hasCaptureActions) ||
                    // offscreen camera rendering to a texture, we don't need a blit pass to resolve to screen
                    m_ActiveCameraColorAttachment.nameID == m_XRTargetHandleAlias.nameID;

                // We need final blit to resolve to screen
                if (!cameraTargetResolved)
                {
                    m_FinalBlitPass.Setup(cameraTargetDescriptor, sourceForFinalPass);
                    EnqueuePass(m_FinalBlitPass);
                }

                if (shouldRenderUI && !outputToHDR)
                {
                    EnqueuePass(m_DrawOverlayUIPass);
                }

#if ENABLE_VR && ENABLE_XR_MODULE
                if (cameraData.xr.enabled)
                {
                    // active depth is depth target, we don't need a blit pass to resolve
                    bool depthTargetResolved = m_ActiveCameraDepthAttachment.nameID == cameraData.xr.renderTarget;

                    if (!depthTargetResolved && cameraData.xr.copyDepth)
                    {
                        m_XRCopyDepthPass.Setup(m_ActiveCameraDepthAttachment, m_XRTargetHandleAlias);
                        m_XRCopyDepthPass.CopyToDepth = true;
                        EnqueuePass(m_XRCopyDepthPass);
                    }
                }
#endif
            }
            // stay in RT so we resume rendering on stack after post-processing
            else if (applyPostProcessing)
            {
                postProcessPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment, false, m_ActiveCameraDepthAttachment, colorGradingLut, m_MotionVectorColor, false, false);
                EnqueuePass(postProcessPass);
            }

#if UNITY_EDITOR
            if (isSceneViewOrPreviewCamera || (isGizmosEnabled && lastCameraInTheStack))
            {
                // Scene view camera should always resolve target (not stacked)
                m_FinalDepthCopyPass.Setup(m_DepthTexture, k_CameraTarget);
                m_FinalDepthCopyPass.CopyToDepth = true;
                m_FinalDepthCopyPass.MssaSamples = 0;
                // Turning off unnecessary NRP in Editor because of MSAA mistmatch between CameraTargetDescriptor vs camera backbuffer
                // NRP layer considers this being a pass with MSAA samples by checking CameraTargetDescriptor taken from RP asset
                // while the camera backbuffer has a single sample
                m_FinalDepthCopyPass.useNativeRenderPass = false;
                EnqueuePass(m_FinalDepthCopyPass);
            }
#endif
        }

        /// <inheritdoc />
        public override void SetupLights(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            m_ForwardLights.Setup(context, ref renderingData);

            if (this.renderingModeActual == RenderingMode.Deferred)
                m_DeferredLights.SetupLights(context, ref renderingData);
        }

        /// <inheritdoc />
        public override void SetupCullingParameters(ref ScriptableCullingParameters cullingParameters,
            ref CameraData cameraData)
        {
            // TODO: PerObjectCulling also affect reflection probes. Enabling it for now.
            // if (asset.additionalLightsRenderingMode == LightRenderingMode.Disabled ||
            //     asset.maxAdditionalLightsCount == 0)
            if (renderingModeActual == RenderingMode.ForwardPlus)
            {
                cullingParameters.cullingOptions |= CullingOptions.DisablePerObjectCulling;
            }

            // We disable shadow casters if both shadow casting modes are turned off
            // or the shadow distance has been turned down to zero
            bool isShadowCastingDisabled = !UniversalRenderPipeline.asset.supportsMainLightShadows && !UniversalRenderPipeline.asset.supportsAdditionalLightShadows;
            bool isShadowDistanceZero = Mathf.Approximately(cameraData.maxShadowDistance, 0.0f);
            if (isShadowCastingDisabled || isShadowDistanceZero)
            {
                cullingParameters.cullingOptions &= ~CullingOptions.ShadowCasters;
            }

            if (this.renderingModeActual == RenderingMode.Deferred)
                cullingParameters.maximumVisibleLights = 0xFFFF;
            else if (this.renderingModeActual == RenderingMode.ForwardPlus)
            {
                // We don't add one to the maximum light because mainlight is treated as any other light.
                cullingParameters.maximumVisibleLights = UniversalRenderPipeline.maxVisibleAdditionalLights;

                // Do not sort reflection probe from engine it will come in reverse order from what we need.
                cullingParameters.reflectionProbeSortingCriteria = ReflectionProbeSortingCriteria.None;
            }
            else
            {
                // We set the number of maximum visible lights allowed and we add one for the mainlight...
                //
                // Note: However ScriptableRenderContext.Cull() does not differentiate between light types.
                //       If there is no active main light in the
scene, ScriptableRenderContext.Cull() might return  ( cullingParameters.maximumVisibleLights )  visible additional lights.//       i.e ScriptableRenderContext.Cull() might return  ( UniversalRenderPipeline.maxVisibleAdditionalLights + 1 )  visible additional lights !cullingParameters.maximumVisibleLights = UniversalRenderPipeline.maxVisibleAdditionalLights + 1;}cullingParameters.shadowDistance = cameraData.maxShadowDistance;cullingParameters.conservativeEnclosingSphere = UniversalRenderPipeline.asset.conservativeEnclosingSphere;cullingParameters.numIterationsEnclosingSphere = UniversalRenderPipeline.asset.numIterationsEnclosingSphere;}/// <inheritdoc />public override void FinishRendering(CommandBuffer cmd){m_ColorBufferSystem.Clear();m_ActiveCameraColorAttachment = null;m_ActiveCameraDepthAttachment = null;}void EnqueueDeferred(ref RenderingData renderingData, bool hasDepthPrepass, bool hasNormalPrepass, bool hasRenderingLayerPrepass, bool applyMainShadow, bool applyAdditionalShadow){m_DeferredLights.Setup(ref renderingData,applyAdditionalShadow ? 
m_AdditionalLightsShadowCasterPass : null,hasDepthPrepass,hasNormalPrepass,hasRenderingLayerPrepass,m_DepthTexture,m_ActiveCameraDepthAttachment,m_ActiveCameraColorAttachment);// Need to call Configure for both of these passes to setup input attachments as first frame otherwise will raise errorsif (useRenderPassEnabled && m_DeferredLights.UseRenderPass){m_GBufferPass.Configure(null, renderingData.cameraData.cameraTargetDescriptor);m_DeferredPass.Configure(null, renderingData.cameraData.cameraTargetDescriptor);}EnqueuePass(m_GBufferPass);//Must copy depth for deferred shading: TODO wait for API fix to bind depth texture as read-only resource.if (!useRenderPassEnabled || !m_DeferredLights.UseRenderPass){m_GBufferCopyDepthPass.Setup(m_CameraDepthAttachment, m_DepthTexture);EnqueuePass(m_GBufferCopyDepthPass);}EnqueuePass(m_DeferredPass);EnqueuePass(m_RenderOpaqueForwardOnlyPass);}private struct RenderPassInputSummary{internal bool requiresDepthTexture;internal bool requiresDepthPrepass;internal bool requiresNormalsTexture;internal bool requiresColorTexture;internal bool requiresColorTextureCreated;internal bool requiresMotionVectors;internal RenderPassEvent requiresDepthNormalAtEvent;internal RenderPassEvent requiresDepthTextureEarliestEvent;}private RenderPassInputSummary GetRenderPassInputs(ref RenderingData renderingData){RenderPassEvent beforeMainRenderingEvent = m_RenderingMode == RenderingMode.Deferred ? 
RenderPassEvent.BeforeRenderingGbuffer : RenderPassEvent.BeforeRenderingOpaques;RenderPassInputSummary inputSummary = new RenderPassInputSummary();inputSummary.requiresDepthNormalAtEvent = RenderPassEvent.BeforeRenderingOpaques;inputSummary.requiresDepthTextureEarliestEvent = RenderPassEvent.BeforeRenderingPostProcessing;for (int i = 0; i < activeRenderPassQueue.Count; ++i){ScriptableRenderPass pass = activeRenderPassQueue[i];bool needsDepth = (pass.input & ScriptableRenderPassInput.Depth) != ScriptableRenderPassInput.None;bool needsNormals = (pass.input & ScriptableRenderPassInput.Normal) != ScriptableRenderPassInput.None;bool needsColor = (pass.input & ScriptableRenderPassInput.Color) != ScriptableRenderPassInput.None;bool needsMotion = (pass.input & ScriptableRenderPassInput.Motion) != ScriptableRenderPassInput.None;bool eventBeforeMainRendering = pass.renderPassEvent <= beforeMainRenderingEvent;// TODO: Need a better way to handle this, probably worth to recheck after render graph// DBuffer requires color texture created as it does not handle y flip correctlyif (pass is DBufferRenderPass dBufferRenderPass){inputSummary.requiresColorTextureCreated = true;}inputSummary.requiresDepthTexture |= needsDepth;inputSummary.requiresDepthPrepass |= needsNormals || needsDepth && eventBeforeMainRendering;inputSummary.requiresNormalsTexture |= needsNormals;inputSummary.requiresColorTexture |= needsColor;inputSummary.requiresMotionVectors |= needsMotion;if (needsDepth)inputSummary.requiresDepthTextureEarliestEvent = (RenderPassEvent)Mathf.Min((int)pass.renderPassEvent, (int)inputSummary.requiresDepthTextureEarliestEvent);if (needsNormals || needsDepth)inputSummary.requiresDepthNormalAtEvent = (RenderPassEvent)Mathf.Min((int)pass.renderPassEvent, (int)inputSummary.requiresDepthNormalAtEvent);}// NOTE: TAA and motion vector dependencies added here to share between Execute and Render (Graph) paths.// TAA in postprocess requires motion to function.if 
(renderingData.cameraData.IsTemporalAAEnabled())inputSummary.requiresMotionVectors = true;// Motion vectors imply depthif (inputSummary.requiresMotionVectors){inputSummary.requiresDepthTexture = true;inputSummary.requiresDepthTextureEarliestEvent = (RenderPassEvent)Mathf.Min((int)m_MotionVectorPass.renderPassEvent, (int)inputSummary.requiresDepthTextureEarliestEvent);}return inputSummary;}void CreateCameraRenderTarget(ScriptableRenderContext context, ref RenderTextureDescriptor descriptor, bool primedDepth, CommandBuffer cmd, ref CameraData cameraData){using (new ProfilingScope(null, Profiling.createCameraRenderTarget)){if (m_ColorBufferSystem.PeekBackBuffer() == null || m_ColorBufferSystem.PeekBackBuffer().nameID != BuiltinRenderTextureType.CameraTarget){m_ActiveCameraColorAttachment = m_ColorBufferSystem.GetBackBuffer(cmd);ConfigureCameraColorTarget(m_ActiveCameraColorAttachment);cmd.SetGlobalTexture("_CameraColorTexture", m_ActiveCameraColorAttachment.nameID);//Set _AfterPostProcessTexture, users might still rely on this although it is now always the cameratarget due to swapbuffercmd.SetGlobalTexture("_AfterPostProcessTexture", m_ActiveCameraColorAttachment.nameID);}if (m_CameraDepthAttachment == null || m_CameraDepthAttachment.nameID != BuiltinRenderTextureType.CameraTarget){//清除未旧附件m_CameraDepthAttachment?.Release();var depthDescriptor = descriptor;depthDescriptor.useMipMap = false;depthDescriptor.autoGenerateMips = false;depthDescriptor.bindMS = false;bool hasMSAA = depthDescriptor.msaaSamples > 1 && (SystemInfo.supportsMultisampledTextures != 0);// if MSAA is enabled and we are not resolving depth, which we only do if the CopyDepthPass is AfterTransparents,// then we want to bind the multisampled surface.if (hasMSAA){// if depth priming is enabled the copy depth primed pass is meant to do the MSAA resolve, so we want to bind the MS surfaceif (IsDepthPrimingEnabled(ref cameraData))depthDescriptor.bindMS = true;elsedepthDescriptor.bindMS = 
!(RenderingUtils.MultisampleDepthResolveSupported() &&SystemInfo.supportsMultisampleAutoResolve &&m_CopyDepthMode == CopyDepthMode.AfterTransparents);}// binding MS surfaces is not supported by the GLES backend, and it won't be fixed after investigating// the high performance impact of potential fixes, which would make it more expensive than depth prepass (fogbugz 1339401 for more info)if (IsGLESDevice())depthDescriptor.bindMS = false;depthDescriptor.graphicsFormat = GraphicsFormat.None;depthDescriptor.depthStencilFormat = k_DepthStencilFormat;//设置CameraDepthAttachment精度if (!cameraData.camera.orthographic)depthDescriptor.depthBufferBits = k_DepthBufferBits;elsedepthDescriptor.depthBufferBits = 24;RenderingUtils.ReAllocateIfNeeded(ref m_CameraDepthAttachment, depthDescriptor, FilterMode.Point, TextureWrapMode.Clamp, name: "_CameraDepthAttachment");cmd.SetGlobalTexture(m_CameraDepthAttachment.name, m_CameraDepthAttachment.nameID);// update the descriptor to match the depth attachmentdescriptor.depthStencilFormat = depthDescriptor.depthStencilFormat;descriptor.depthBufferBits = depthDescriptor.depthBufferBits;}}context.ExecuteCommandBuffer(cmd);cmd.Clear();}bool PlatformRequiresExplicitMsaaResolve(){
#if UNITY_EDITOR// In the editor play-mode we use a Game View Render Texture, with// samples count forced to 1 so we always need to do an explicit MSAA resolve.return true;
#else// On Metal/iOS the MSAA resolve is done implicitly as part of the renderpass, so we do not need an extra intermediate pass for the explicit autoresolve.// Note: On Vulkan Standalone, despite SystemInfo.supportsMultisampleAutoResolve being true, the backbuffer has only 1 sample, so we still require// the explicit resolve on non-mobile platforms with supportsMultisampleAutoResolve.return !(SystemInfo.supportsMultisampleAutoResolve && Application.isMobilePlatform)&& SystemInfo.graphicsDeviceType != GraphicsDeviceType.Metal;
#endif}/// <summary>/// Checks if the pipeline needs to create a intermediate render texture./// </summary>/// <param name="cameraData">CameraData contains all relevant render target information for the camera.</param>/// <seealso cref="CameraData"/>/// <returns>Return true if pipeline needs to render to a intermediate render texture.</returns>bool RequiresIntermediateColorTexture(ref CameraData cameraData){// When rendering a camera stack we always create an intermediate render texture to composite camera results.// We create it upon rendering the Base camera.if (cameraData.renderType == CameraRenderType.Base && !cameraData.resolveFinalTarget)return true;// Always force rendering into intermediate color texture if deferred rendering mode is selected.// Reason: without intermediate color texture, the target camera texture is y-flipped.// However, the target camera texture is bound during gbuffer pass and deferred pass.// Gbuffer pass will not be y-flipped because it is MRT (see ScriptableRenderContext implementation),// while deferred pass will be y-flipped, which breaks rendering.// This incurs an extra blit into at the end of rendering.if (this.renderingModeActual == RenderingMode.Deferred)return true;bool isSceneViewCamera = cameraData.isSceneViewCamera;var cameraTargetDescriptor = cameraData.cameraTargetDescriptor;int msaaSamples = cameraTargetDescriptor.msaaSamples;bool isScaledRender = cameraData.imageScalingMode != ImageScalingMode.None;bool isCompatibleBackbufferTextureDimension = cameraTargetDescriptor.dimension == TextureDimension.Tex2D;bool requiresExplicitMsaaResolve = msaaSamples > 1 && PlatformRequiresExplicitMsaaResolve();bool isOffscreenRender = cameraData.targetTexture != null && !isSceneViewCamera;bool isCapturing = cameraData.captureActions != null;#if ENABLE_VR && ENABLE_XR_MODULEif (cameraData.xr.enabled){isScaledRender = false;isCompatibleBackbufferTextureDimension = cameraData.xr.renderTargetDesc.dimension == cameraTargetDescriptor.dimension;}
#endifbool postProcessEnabled = cameraData.postProcessEnabled && m_PostProcessPasses.isCreated;bool requiresBlitForOffscreenCamera = postProcessEnabled || cameraData.requiresOpaqueTexture || requiresExplicitMsaaResolve || !cameraData.isDefaultViewport;if (isOffscreenRender)return requiresBlitForOffscreenCamera;return requiresBlitForOffscreenCamera || isSceneViewCamera || isScaledRender || cameraData.isHdrEnabled ||!isCompatibleBackbufferTextureDimension || isCapturing || cameraData.requireSrgbConversion;}bool CanCopyDepth(ref CameraData cameraData){bool msaaEnabledForCamera = cameraData.cameraTargetDescriptor.msaaSamples > 1;bool supportsTextureCopy = SystemInfo.copyTextureSupport != CopyTextureSupport.None;bool supportsDepthTarget = RenderingUtils.SupportsRenderTextureFormat(RenderTextureFormat.Depth);bool supportsDepthCopy = !msaaEnabledForCamera && (supportsDepthTarget || supportsTextureCopy);bool msaaDepthResolve = msaaEnabledForCamera && SystemInfo.supportsMultisampledTextures != 0;// copying MSAA depth on GLES3 is giving invalid results. 
This won't be fixed by the driver team because it would introduce performance issues (more info in the Fogbugz issue 1339401 comments)if (IsGLESDevice() && msaaDepthResolve)return false;return supportsDepthCopy || msaaDepthResolve;}internal override void SwapColorBuffer(CommandBuffer cmd){m_ColorBufferSystem.Swap();//Check if we are using the depth that is attached to color bufferif (m_ActiveCameraDepthAttachment.nameID != BuiltinRenderTextureType.CameraTarget)ConfigureCameraTarget(m_ColorBufferSystem.GetBackBuffer(cmd), m_ActiveCameraDepthAttachment);elseConfigureCameraColorTarget(m_ColorBufferSystem.GetBackBuffer(cmd));m_ActiveCameraColorAttachment = m_ColorBufferSystem.GetBackBuffer(cmd);cmd.SetGlobalTexture("_CameraColorTexture", m_ActiveCameraColorAttachment.nameID);//Set _AfterPostProcessTexture, users might still rely on this although it is now always the cameratarget due to swapbuffercmd.SetGlobalTexture("_AfterPostProcessTexture", m_ActiveCameraColorAttachment.nameID);}internal override RTHandle GetCameraColorFrontBuffer(CommandBuffer cmd){return m_ColorBufferSystem.GetFrontBuffer(cmd);}internal override RTHandle GetCameraColorBackBuffer(CommandBuffer cmd){return m_ColorBufferSystem.GetBackBuffer(cmd);}internal override void EnableSwapBufferMSAA(bool enable){m_ColorBufferSystem.EnableMSAA(enable);}}
}
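
As a rough sanity check on what these changes save, the sketch below estimates the VRAM footprint of a single depth attachment before and after the optimization. This is an illustration, not Unity code: the per-pixel byte counts are idealized packed sizes (real GPUs often pad D32_SFloat_S8_UInt to 8 bytes per pixel, so actual savings can be even larger), and the 1920×1080 resolution is just an example.

```python
# Approximate tightly-packed bytes per pixel for each depth/stencil format.
# These are assumptions for illustration; drivers may pad or align differently.
BYTES_PER_PIXEL = {
    "D32_SFloat_S8_UInt": 5,  # 32-bit depth + 8-bit stencil
    "D24_UNorm_S8_UInt": 4,   # 24-bit depth + 8-bit stencil
    "D16_UNorm": 2,           # 16-bit depth, no stencil
}

def depth_memory_mb(width: int, height: int, fmt: str) -> float:
    """Estimated size of one depth attachment in MB (1 MB = 1024*1024 bytes)."""
    return width * height * BYTES_PER_PIXEL[fmt] / (1024 * 1024)

# Before: full-resolution D32_SFloat_S8_UInt attachment.
before = depth_memory_mb(1920, 1080, "D32_SFloat_S8_UInt")
# After: half-resolution attachment with the reduced format.
after = depth_memory_mb(960, 540, "D24_UNorm_S8_UInt")
print(f"before: {before:.2f} MB, after: {after:.2f} MB")
```

Multiply the "before" figure by the number of depth copies the stock renderer keeps alive and it lines up with the ~15 MB observed earlier, which is why releasing the stale attachment and shrinking the descriptor together give the big win.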

基于Spring Boot + Vue 项目中引入deepseek方法

准备工作 在开始调用 DeepSeek API 之前&#xff0c;你需要完成以下准备工作&#xff1a; 1.访问 DeepSeek 官网&#xff0c;注册一个账号。 2.获取 API 密钥&#xff1a;登录 DeepSeek 平台&#xff0c;进入 API 管理 页面。创建一个新的 API 密钥&#xff08;API Key&#x…...