Merge pull request #1602 from billhollings/vk-ext-metal-objects

Merge branch master into vk-ext-metal-objects and update to use latest VK_EXT_metal_objects API
diff --git a/Docs/MoltenVK_Runtime_UserGuide.md b/Docs/MoltenVK_Runtime_UserGuide.md
index b8564de..d17615b 100644
--- a/Docs/MoltenVK_Runtime_UserGuide.md
+++ b/Docs/MoltenVK_Runtime_UserGuide.md
@@ -270,6 +270,7 @@
 - `VK_KHR_device_group`
 - `VK_KHR_device_group_creation`
 - `VK_KHR_driver_properties`
+- `VK_KHR_dynamic_rendering`
 - `VK_KHR_get_memory_requirements2`
 - `VK_KHR_get_physical_device_properties2`
 - `VK_KHR_get_surface_capabilities2`
@@ -284,6 +285,7 @@
 - `VK_KHR_relaxed_block_layout`
 - `VK_KHR_sampler_mirror_clamp_to_edge` *(requires a Mac GPU or Apple family 7 GPU)*
 - `VK_KHR_sampler_ycbcr_conversion`
+- `VK_KHR_separate_depth_stencil_layouts`
 - `VK_KHR_shader_draw_parameters`
 - `VK_KHR_shader_float16_int8`
 - `VK_KHR_shader_subgroup_extended_types` *(requires Metal 2.1 on Mac or Metal 2.2 and Apple family 4 on iOS)*
@@ -308,7 +310,9 @@
 - `VK_EXT_post_depth_coverage` *(iOS and macOS, requires family 4 (A11) or better Apple GPU)*
 - `VK_EXT_private_data`
 - `VK_EXT_robustness2`
+- `VK_EXT_sample_locations`
 - `VK_EXT_scalar_block_layout`
+- `VK_EXT_separate_stencil_usage`
 - `VK_EXT_shader_stencil_export` *(requires Mac GPU family 2 or iOS GPU family 5)*
 - `VK_EXT_shader_viewport_index_layer`
 - `VK_EXT_subgroup_size_control` *(requires Metal 2.1 on Mac or Metal 2.2 and Apple family 4 on iOS)*
@@ -332,6 +336,15 @@
 *Vulkan* rendering surface. You can enable the `VK_EXT_metal_surface` extension by defining the `VK_USE_PLATFORM_METAL_EXT` guard macro in your compiler build settings. See the description of 
 the `mvk_vulkan.h` file below for a convenient way to enable this extension automatically.
 
+Because **MoltenVK** supports the `VK_KHR_portability_subset` extension, when using the 
+*Vulkan Loader* from the *Vulkan SDK* to run **MoltenVK** on *macOS*, the *Vulkan Loader* 
+will include **MoltenVK** `VkPhysicalDevices` in the list returned by 
+`vkEnumeratePhysicalDevices()` only if the `VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR` 
+flag is enabled in `vkCreateInstance()`. See the description of the `VK_KHR_portability_enumeration` 
+extension in the *Vulkan* specification for more information about the use of this flag.
+
+
 
 <a name="moltenvk_extension"></a>
 ### MoltenVK `VK_MVK_moltenvk` Extension
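The portability-enumeration behavior described in the user guide above can be sketched as follows. This is a minimal illustration, not MoltenVK code: the constants mirror the values published in `vulkan_core.h`, and a real application would include `<vulkan/vulkan.h>` and pass a full `VkInstanceCreateInfo` to `vkCreateInstance()` instead of the simplified struct used here.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Illustrative stand-ins mirroring vulkan_core.h; a real app should include
// <vulkan/vulkan.h> and use the SDK-provided definitions instead.
constexpr uint32_t VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR = 0x00000001;
constexpr const char* VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME =
    "VK_KHR_portability_enumeration";

// Simplified stand-in for the relevant VkInstanceCreateInfo fields.
struct InstanceCreateInfoSketch {
    uint32_t flags = 0;
    const char* const* ppEnabledExtensionNames = nullptr;
    uint32_t enabledExtensionCount = 0;
};

// Enable portability enumeration so the Vulkan Loader will report
// portability-subset devices (such as MoltenVK) from vkEnumeratePhysicalDevices().
InstanceCreateInfoSketch makePortabilityInstanceInfo(const char* const* exts,
                                                     uint32_t count) {
    InstanceCreateInfoSketch info;
    info.flags |= VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;
    info.ppEnabledExtensionNames = exts;
    info.enabledExtensionCount = count;
    return info;
}
```

The key point is that both pieces are needed together: the extension name in the enabled-extension list and the flag bit in the instance creation flags.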
diff --git a/Docs/Whats_New.md b/Docs/Whats_New.md
index 80c235a..5c05fbd 100644
--- a/Docs/Whats_New.md
+++ b/Docs/Whats_New.md
@@ -13,29 +13,65 @@
 
 
 
-MoltenVK 1.1.9
+MoltenVK 1.1.10
 --------------
 
 Released TBD
 
-- Fixes to pipeline layout compatibility.
+- Add support for extensions:
+	- `VK_KHR_portability_enumeration` support added to `MoltenVK_icd.json`, and documentation
+	  updated to indicate the impact of the `VK_KHR_portability_enumeration` extension during 
+	  runtime loading on *macOS* via the *Vulkan Loader*.
+	- `VK_KHR_dynamic_rendering`
+	- `VK_KHR_separate_depth_stencil_layouts`
+	- `VK_EXT_separate_stencil_usage`
+- Support attachment clearing when some clearing formats are not specified.
+- Fix error where previously bound push constants can override a descriptor buffer binding 
+  used by a subsequent pipeline that does not use push constants.
+- Fix error on some Apple GPUs where a `vkCmdWriteTimestamp()` issued after a renderpass 
+  was writing the timestamp before renderpass activity was complete.
+- Fix regression in vertex buffer binding counts when establishing implicit buffer binding indexes.
+- Work around zombie memory bug in Intel Iris Plus Graphics driver when repeatedly retrieving GPU counter sets.
+- Update to latest SPIRV-Cross:
+	- MSL: Emit interface block members of array length 1 as arrays instead of scalars.
+
+
+
+MoltenVK 1.1.9
+--------------
+
+Released 2022/04/11
+
+- Add support for extensions:
+	- `VK_EXT_sample_locations` _(Custom locations settable via_ `vkCmdBeginRenderPass()` _only, 
+	   since_ `VkPhysicalDeviceSampleLocationsPropertiesEXT::variableSampleLocations` _is `false`)_.
+- Fixes to pipeline layout compatibility between sequentially bound pipelines.
 - Reinstate memory barriers on non-Apple GPUs, which were inadvertently disabled in an earlier update.
 - Add base vertex instance support in shader conversion.
 - Fix alignment between outputs and inputs between shader stages when using nested structures.
 - Fix issue where the depth component of a stencil-only renderpass attachment was incorrectly attempting to be stored.
 - Fix deletion of GPU counter `MTLFence` while it is being used by `MTLCommandBuffer`.
+- Fix crash in `vkGetMTLCommandQueueMVK()`.
+- Fix leak of `CoreFoundation` objects during calls to `vkUseIOSurfaceMVK()`.
 - Remove limit on `VkPhysicalDeviceLimits::maxSamplerAllocationCount` when not using Metal argument buffers.
 - Avoid adjusting SRGB clear color values by half-ULP on GPUs that round float clear colors down.
 - Fixes to optimize resource objects retained by descriptors beyond their lifetimes.
+- Optimize behavior for `VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT` when 
+  `MVK_CONFIG_PREFILL_METAL_COMMAND_BUFFERS` is used.
 - `MoltenVKShaderConverter` tool defaults to the highest MSL version supported on runtime OS.
 - Update *glslang* version, to use `python3` in *glslang* scripts, to replace missing `python` on *macOS 12.3*.
+- Update `VK_MVK_MOLTENVK_SPEC_VERSION` to version `34`.
 - Update to latest SPIRV-Cross:
 	- MSL: Support input/output blocks containing nested struct arrays.
 	- MSL: Use var name instead of var-type name for flattened interface members.
 	- MSL: Handle aliased variable names for resources placed in IB struct.
+	- MSL: Handle awkward mix and match of `Offset` / `ArrayStride` in constants.
 	- MSL: Append entry point args to local variable names to avoid conflicts.
-	- MSL: Consider that gl_IsHelperInvocation can be Volatile.
+	- MSL: Consider that `gl_IsHelperInvocation` can be `Volatile`.
 	- MSL: Refactor and fix use of quadgroup vs simdgroup.
+	- Handle `OpTerminateInvocation`.
+	- Fixup names of anonymous inner structs.
+	- Fix regression from adding 64-bit switch support.
 
 
 
diff --git a/ExternalRevisions/SPIRV-Cross_repo_revision b/ExternalRevisions/SPIRV-Cross_repo_revision
index 78a1017..57af954 100644
--- a/ExternalRevisions/SPIRV-Cross_repo_revision
+++ b/ExternalRevisions/SPIRV-Cross_repo_revision
@@ -1 +1 @@
-0b51794f0142a3124f4e351cfc0616a48268ba97
+c52333b984c529f92f0c33e3a0ef01d1322c8a07
diff --git a/ExternalRevisions/Vulkan-Headers_repo_revision b/ExternalRevisions/Vulkan-Headers_repo_revision
index 3c99c89..30e1cdb 100644
--- a/ExternalRevisions/Vulkan-Headers_repo_revision
+++ b/ExternalRevisions/Vulkan-Headers_repo_revision
@@ -1 +1 @@
-1dace16d8044758d32736eb59802d171970e9448
+76f00ef6cbb1886eb1162d1fa39bee8b51e22ee8
diff --git a/ExternalRevisions/Vulkan-Tools_repo_revision b/ExternalRevisions/Vulkan-Tools_repo_revision
index 4eae18e..cbd21e4 100644
--- a/ExternalRevisions/Vulkan-Tools_repo_revision
+++ b/ExternalRevisions/Vulkan-Tools_repo_revision
@@ -1 +1 @@
-bb32aa13d4920261b5086219028ef329605d0126
+3903162ac4b01ed376bfa55a72ef7217a72c0b74
diff --git a/ExternalRevisions/glslang_repo_revision b/ExternalRevisions/glslang_repo_revision
index cf673c4..31aa78d 100644
--- a/ExternalRevisions/glslang_repo_revision
+++ b/ExternalRevisions/glslang_repo_revision
@@ -1 +1 @@
-90d4bd05cd77ef5782a6779a0fe3d084440dc80d
+9bb8cfffb0eed010e07132282c41d73064a7a609
diff --git a/MoltenVK/MoltenVK/API/vk_mvk_moltenvk.h b/MoltenVK/MoltenVK/API/vk_mvk_moltenvk.h
index 583aef9..858f060 100644
--- a/MoltenVK/MoltenVK/API/vk_mvk_moltenvk.h
+++ b/MoltenVK/MoltenVK/API/vk_mvk_moltenvk.h
@@ -50,12 +50,12 @@
  */
 #define MVK_VERSION_MAJOR   1
 #define MVK_VERSION_MINOR   1
-#define MVK_VERSION_PATCH   9
+#define MVK_VERSION_PATCH   10
 
 #define MVK_MAKE_VERSION(major, minor, patch)    (((major) * 10000) + ((minor) * 100) + (patch))
 #define MVK_VERSION     MVK_MAKE_VERSION(MVK_VERSION_MAJOR, MVK_VERSION_MINOR, MVK_VERSION_PATCH)
 
-#define VK_MVK_MOLTENVK_SPEC_VERSION            33
+#define VK_MVK_MOLTENVK_SPEC_VERSION            34
 #define VK_MVK_MOLTENVK_EXTENSION_NAME          "VK_MVK_moltenvk"
 
 /** Identifies the level of logging MoltenVK should be limited to outputting. */
@@ -786,7 +786,7 @@
 	 * command buffer submission, to a physically removed GPU. In the case where this error does
 	 * not impact the VkPhysicalDevice, Vulkan requires that the app destroy and re-create a new
 	 * VkDevice. However, not all apps (including CTS) respect that requirement, leading to what
-	 * might be a transient command submission failure causing an unexpected catastophic app failure.
+	 * might be a transient command submission failure causing an unexpected catastrophic app failure.
 	 *
 	 * If this setting is enabled, in the case of a VK_ERROR_DEVICE_LOST error that does NOT impact
 	 * the VkPhysicalDevice, MoltenVK will log the error, but will not mark the VkDevice as lost,
@@ -929,6 +929,7 @@
 	VkBool32 descriptorSetArgumentBuffers;			/**< If true, a Metal argument buffer can be assigned to a descriptor set, and used on any pipeline and pipeline stage. If false, a different Metal argument buffer must be used for each pipeline-stage/descriptor-set combination. */
 	MVKFloatRounding clearColorFloatRounding;		/**< Identifies the type of rounding Metal uses for MTLClearColor float to integer conversions. */
 	MVKCounterSamplingFlags counterSamplingPoints;	/**< Identifies the points where pipeline GPU counter sampling may occur. */
+	VkBool32 programmableSamplePositions;			/**< If true, programmable MSAA sample positions are supported. */
 } MVKPhysicalDeviceMetalFeatures;
 
 /** MoltenVK performance of a particular type of activity. */
@@ -992,7 +993,7 @@
 #pragma mark Function types
 
 typedef VkResult (VKAPI_PTR *PFN_vkGetMoltenVKConfigurationMVK)(VkInstance ignored, MVKConfiguration* pConfiguration, size_t* pConfigurationSize);
-typedef VkResult (VKAPI_PTR *PFN_vkSetMoltenVKConfigurationMVK)(VkInstance ignored, MVKConfiguration* pConfiguration, size_t* pConfigurationSize);
+typedef VkResult (VKAPI_PTR *PFN_vkSetMoltenVKConfigurationMVK)(VkInstance ignored, const MVKConfiguration* pConfiguration, size_t* pConfigurationSize);
 typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceMetalFeaturesMVK)(VkPhysicalDevice physicalDevice, MVKPhysicalDeviceMetalFeatures* pMetalFeatures, size_t* pMetalFeaturesSize);
 typedef VkResult (VKAPI_PTR *PFN_vkGetPerformanceStatisticsMVK)(VkDevice device, MVKPerformanceStatistics* pPerf, size_t* pPerfSize);
 typedef void (VKAPI_PTR *PFN_vkGetVersionStringsMVK)(char* pMoltenVersionStringBuffer, uint32_t moltenVersionStringBufferLength, char* pVulkanVersionStringBuffer, uint32_t vulkanVersionStringBufferLength);
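The version-macro change in the hunk above bumps the patch number to `10`. MoltenVK packs its version as a decimal `MMmmpp`-style integer rather than Vulkan's bit-shifted `VK_MAKE_VERSION`, so version 1.1.10 encodes as `10110`. The macros below are copied directly from `vk_mvk_moltenvk.h` as shown in the diff:

```cpp
#include <cassert>

// Copied from vk_mvk_moltenvk.h: MoltenVK packs (major, minor, patch)
// as major*10000 + minor*100 + patch, a plain decimal encoding.
#define MVK_VERSION_MAJOR   1
#define MVK_VERSION_MINOR   1
#define MVK_VERSION_PATCH   10

#define MVK_MAKE_VERSION(major, minor, patch)    (((major) * 10000) + ((minor) * 100) + (patch))
#define MVK_VERSION     MVK_MAKE_VERSION(MVK_VERSION_MAJOR, MVK_VERSION_MINOR, MVK_VERSION_PATCH)
```

One consequence of this encoding is that integer comparison orders versions correctly as long as minor and patch stay below 100.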
diff --git a/MoltenVK/MoltenVK/Commands/MVKCmdDraw.mm b/MoltenVK/MoltenVK/Commands/MVKCmdDraw.mm
index 8a826fa..3f9a52e 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCmdDraw.mm
+++ b/MoltenVK/MoltenVK/Commands/MVKCmdDraw.mm
@@ -193,7 +193,7 @@
                 if (pipeline->needsVertexOutputBuffer()) {
                     [mtlTessCtlEncoder setBuffer: vtxOutBuff->_mtlBuffer
                                           offset: vtxOutBuff->_offset
-                                         atIndex: kMVKTessCtlInputBufferIndex];
+                                         atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessCtlInputBufferBinding)];
                 }
 				
 				NSUInteger sgSize = pipeline->getTessControlStageState().threadExecutionWidth;
@@ -221,16 +221,16 @@
                     if (pipeline->needsTessCtlOutputBuffer()) {
                         [cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcOutBuff->_mtlBuffer
                                                                 offset: tcOutBuff->_offset
-                                                               atIndex: kMVKTessEvalInputBufferIndex];
+                                                               atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalInputBufferBinding)];
                     }
                     if (pipeline->needsTessCtlPatchOutputBuffer()) {
                         [cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcPatchOutBuff->_mtlBuffer
                                                                 offset: tcPatchOutBuff->_offset
-                                                               atIndex: kMVKTessEvalPatchInputBufferIndex];
+                                                               atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalPatchInputBufferBinding)];
                     }
                     [cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcLevelBuff->_mtlBuffer
                                                             offset: tcLevelBuff->_offset
-                                                           atIndex: kMVKTessEvalLevelBufferIndex];
+                                                           atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalLevelBufferBinding)];
                     [cmdEncoder->_mtlRenderEncoder setTessellationFactorBuffer: tcLevelBuff->_mtlBuffer
                                                                         offset: tcLevelBuff->_offset
                                                                 instanceStride: 0];
@@ -395,7 +395,7 @@
                 if (pipeline->needsVertexOutputBuffer()) {
                     [mtlTessCtlEncoder setBuffer: vtxOutBuff->_mtlBuffer
                                           offset: vtxOutBuff->_offset
-                                         atIndex: kMVKTessCtlInputBufferIndex];
+                                         atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessCtlInputBufferBinding)];
                 }
 				// The vertex shader produced output in the correct order, so there's no need to use
 				// an index buffer here.
@@ -424,16 +424,16 @@
                     if (pipeline->needsTessCtlOutputBuffer()) {
                         [cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcOutBuff->_mtlBuffer
                                                                 offset: tcOutBuff->_offset
-                                                               atIndex: kMVKTessEvalInputBufferIndex];
+                                                               atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalInputBufferBinding)];
                     }
                     if (pipeline->needsTessCtlPatchOutputBuffer()) {
                         [cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcPatchOutBuff->_mtlBuffer
                                                                 offset: tcPatchOutBuff->_offset
-                                                               atIndex: kMVKTessEvalPatchInputBufferIndex];
+                                                               atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalPatchInputBufferBinding)];
                     }
                     [cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcLevelBuff->_mtlBuffer
                                                             offset: tcLevelBuff->_offset
-                                                           atIndex: kMVKTessEvalLevelBufferIndex];
+                                                           atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalLevelBufferBinding)];
                     [cmdEncoder->_mtlRenderEncoder setTessellationFactorBuffer: tcLevelBuff->_mtlBuffer
                                                                         offset: tcLevelBuff->_offset
                                                                 instanceStride: 0];
@@ -741,7 +741,7 @@
                     if (pipeline->needsVertexOutputBuffer()) {
                         [mtlTessCtlEncoder setBuffer: vtxOutBuff->_mtlBuffer
                                               offset: vtxOutBuff->_offset
-                                             atIndex: kMVKTessCtlInputBufferIndex];
+                                             atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessCtlInputBufferBinding)];
                     }
                     [mtlTessCtlEncoder dispatchThreadgroupsWithIndirectBuffer: mtlIndBuff
                                                          indirectBufferOffset: mtlIndBuffOfst
@@ -757,16 +757,16 @@
 							if (pipeline->needsTessCtlOutputBuffer()) {
 								[cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcOutBuff->_mtlBuffer
 																		offset: tcOutBuff->_offset
-																	   atIndex: kMVKTessEvalInputBufferIndex];
+																	   atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalInputBufferBinding)];
 							}
 							if (pipeline->needsTessCtlPatchOutputBuffer()) {
 								[cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcPatchOutBuff->_mtlBuffer
 																		offset: tcPatchOutBuff->_offset
-																	   atIndex: kMVKTessEvalPatchInputBufferIndex];
+																	   atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalPatchInputBufferBinding)];
 							}
 							[cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcLevelBuff->_mtlBuffer
 																	offset: tcLevelBuff->_offset
-																   atIndex: kMVKTessEvalLevelBufferIndex];
+																   atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalLevelBufferBinding)];
 							[cmdEncoder->_mtlRenderEncoder setTessellationFactorBuffer: tcLevelBuff->_mtlBuffer
 																				offset: tcLevelBuff->_offset
 																		instanceStride: 0];
@@ -1076,7 +1076,7 @@
                     if (pipeline->needsVertexOutputBuffer()) {
                         [mtlTessCtlEncoder setBuffer: vtxOutBuff->_mtlBuffer
                                               offset: vtxOutBuff->_offset
-                                             atIndex: kMVKTessCtlInputBufferIndex];
+                                             atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessCtlInputBufferBinding)];
                     }
                     [mtlTessCtlEncoder dispatchThreadgroupsWithIndirectBuffer: mtlIndBuff
                                                          indirectBufferOffset: mtlTempIndBuffOfst
@@ -1092,16 +1092,16 @@
 							if (pipeline->needsTessCtlOutputBuffer()) {
 								[cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcOutBuff->_mtlBuffer
 																		offset: tcOutBuff->_offset
-																	   atIndex: kMVKTessEvalInputBufferIndex];
+																	   atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalInputBufferBinding)];
 							}
 							if (pipeline->needsTessCtlPatchOutputBuffer()) {
 								[cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcPatchOutBuff->_mtlBuffer
 																		offset: tcPatchOutBuff->_offset
-																	   atIndex: kMVKTessEvalPatchInputBufferIndex];
+																	   atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalPatchInputBufferBinding)];
 							}
 							[cmdEncoder->_mtlRenderEncoder setVertexBuffer: tcLevelBuff->_mtlBuffer
 																	offset: tcLevelBuff->_offset
-																   atIndex: kMVKTessEvalLevelBufferIndex];
+																   atIndex: cmdEncoder->getDevice()->getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalLevelBufferBinding)];
 							[cmdEncoder->_mtlRenderEncoder setTessellationFactorBuffer: tcLevelBuff->_mtlBuffer
 																				offset: tcLevelBuff->_offset
 																		instanceStride: 0];
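The repeated change in this file replaces fixed `kMVKTess*BufferIndex` constants with a per-device lookup, `getMetalBufferIndexForVertexAttributeBinding()`. As a rough sketch of what such a mapping can look like: implementations of this pattern commonly assign auxiliary buffers downward from the device's per-stage Metal buffer limit, leaving low indexes free for application vertex buffers. The fixed limit and the exact formula below are assumptions for illustration, not MoltenVK's actual device-limit-driven implementation.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: map a logical binding to a Metal buffer index by
// counting down from the top of a fixed per-stage buffer limit, so that
// binding 0 gets the highest index and application vertex buffers keep
// the low indexes. The real MoltenVK helper consults device limits.
constexpr uint32_t kMaxPerStageBufferCount = 31;  // assumed Metal per-stage limit

uint32_t metalBufferIndexForBinding(uint32_t binding) {
    return (kMaxPerStageBufferCount - 1) - binding;
}
```

Deriving the index from device limits at runtime, instead of baking in constants, is what lets the same draw-encoding code work across GPUs with different per-stage buffer counts.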
diff --git a/MoltenVK/MoltenVK/Commands/MVKCmdQueries.mm b/MoltenVK/MoltenVK/Commands/MVKCmdQueries.mm
index b19335f..e64493a 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCmdQueries.mm
+++ b/MoltenVK/MoltenVK/Commands/MVKCmdQueries.mm
@@ -85,6 +85,8 @@
 
 	_pipelineStage = pipelineStage;
 
+	cmdBuff->recordTimestampCommand();
+
 	return rslt;
 }
 
diff --git a/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.h b/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.h
index 15c40fc..9b3c14d 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.h
+++ b/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.h
@@ -46,6 +46,7 @@
 
 protected:
 
+	MVKSmallVector<MVKSmallVector<MTLSamplePosition>> _subpassSamplePositions;
 	MVKRenderPass* _renderPass;
 	MVKFramebuffer* _framebuffer;
 	VkRect2D _renderArea;
@@ -138,6 +139,75 @@
 
 
 #pragma mark -
+#pragma mark MVKCmdBeginRendering
+
+/**
+ * Vulkan command to begin dynamic rendering. The template parameter balances vector
+ * pre-allocation between the very common low attachment counts and rarer larger counts.
+ */
+template <size_t N>
+class MVKCmdBeginRendering : public MVKCommand {
+
+public:
+	VkResult setContent(MVKCommandBuffer* cmdBuff,
+						const VkRenderingInfo* pRenderingInfo);
+
+	void encode(MVKCommandEncoder* cmdEncoder) override;
+
+
+protected:
+	MVKCommandTypePool<MVKCommand>* getTypePool(MVKCommandPool* cmdPool) override;
+
+	VkRenderingInfo _renderingInfo;
+	MVKSmallVector<VkRenderingAttachmentInfo, N> _colorAttachments;
+	VkRenderingAttachmentInfo _depthAttachment;
+	VkRenderingAttachmentInfo _stencilAttachment;
+};
+
+// Concrete template class implementations.
+typedef MVKCmdBeginRendering<1> MVKCmdBeginRendering1;
+typedef MVKCmdBeginRendering<2> MVKCmdBeginRendering2;
+typedef MVKCmdBeginRendering<4> MVKCmdBeginRendering4;
+typedef MVKCmdBeginRendering<8> MVKCmdBeginRenderingMulti;
+
+
+#pragma mark -
+#pragma mark MVKCmdEndRendering
+
+/** Vulkan command to end the current dynamic rendering. */
+class MVKCmdEndRendering : public MVKCommand {
+
+public:
+	VkResult setContent(MVKCommandBuffer* cmdBuff);
+
+	void encode(MVKCommandEncoder* cmdEncoder) override;
+
+protected:
+	MVKCommandTypePool<MVKCommand>* getTypePool(MVKCommandPool* cmdPool) override;
+
+};
+
+
+#pragma mark -
+#pragma mark MVKCmdSetSampleLocations
+
+/** Vulkan command to dynamically set custom sample locations. */
+class MVKCmdSetSampleLocations : public MVKCommand {
+
+public:
+	VkResult setContent(MVKCommandBuffer* cmdBuff,
+						const VkSampleLocationsInfoEXT* pSampleLocationsInfo);
+
+	void encode(MVKCommandEncoder* cmdEncoder) override;
+
+protected:
+	MVKCommandTypePool<MVKCommand>* getTypePool(MVKCommandPool* cmdPool) override;
+
+	MVKSmallVector<MTLSamplePosition, 8> _samplePositions;
+};
+
+
+#pragma mark -
 #pragma mark MVKCmdExecuteCommands
 
 /**
diff --git a/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.mm b/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.mm
index 56f0d49..967c905 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.mm
+++ b/MoltenVK/MoltenVK/Commands/MVKCmdRenderPass.mm
@@ -36,6 +36,30 @@
 	_renderPass = (MVKRenderPass*)pRenderPassBegin->renderPass;
 	_framebuffer = (MVKFramebuffer*)pRenderPassBegin->framebuffer;
 	_renderArea = pRenderPassBegin->renderArea;
+	_subpassSamplePositions.clear();
+
+	for (const auto* next = (VkBaseInStructure*)pRenderPassBegin->pNext; next; next = next->pNext) {
+		switch (next->sType) {
+			case VK_STRUCTURE_TYPE_RENDER_PASS_SAMPLE_LOCATIONS_BEGIN_INFO_EXT: {
+				// Build an array of arrays, one array of sample positions for each subpass index.
+				// For subpasses not included in VkRenderPassSampleLocationsBeginInfoEXT, the resulting array of samples will be empty.
+				_subpassSamplePositions.resize(_renderPass->getSubpassCount());
+				auto* pRPSampLocnsInfo = (VkRenderPassSampleLocationsBeginInfoEXT*)next;
+				for (uint32_t spSLIdx = 0; spSLIdx < pRPSampLocnsInfo->postSubpassSampleLocationsCount; spSLIdx++) {
+					auto& spsl = pRPSampLocnsInfo->pPostSubpassSampleLocations[spSLIdx];
+					uint32_t spIdx = spsl.subpassIndex;
+					auto& spSampPosns = _subpassSamplePositions[spIdx];
+					for (uint32_t slIdx = 0; slIdx < spsl.sampleLocationsInfo.sampleLocationsCount; slIdx++) {
+						auto& sl = spsl.sampleLocationsInfo.pSampleLocations[slIdx];
+						spSampPosns.push_back(MTLSamplePositionMake(sl.x, sl.y));
+					}
+				}
+				break;
+			}
+			default:
+				break;
+		}
+	}
 
 	return VK_SUCCESS;
 }
@@ -61,13 +85,23 @@
 template <size_t N_CV, size_t N_A>
 void MVKCmdBeginRenderPass<N_CV, N_A>::encode(MVKCommandEncoder* cmdEncoder) {
 //	MVKLogDebug("Encoding vkCmdBeginRenderPass(). Elapsed time: %.6f ms.", mvkGetElapsedMilliseconds());
+
+	// Convert the sample position array of arrays to an array of array-references,
+	// so that it can be passed to the command encoder.
+	size_t spSPCnt = _subpassSamplePositions.size();
+	MVKArrayRef<MTLSamplePosition> spSPRefs[spSPCnt];
+	for (uint32_t spSPIdx = 0; spSPIdx < spSPCnt; spSPIdx++) {
+		spSPRefs[spSPIdx] = _subpassSamplePositions[spSPIdx].contents();
+	}
+	
 	cmdEncoder->beginRenderpass(this,
 								_contents,
 								_renderPass,
 								_framebuffer,
 								_renderArea,
 								_clearValues.contents(),
-								_attachments.contents());
+								_attachments.contents(),
+								MVKArrayRef(spSPRefs, spSPCnt));
 }
 
 template class MVKCmdBeginRenderPass<1, 0>;
@@ -132,6 +166,72 @@
 
 
 #pragma mark -
+#pragma mark MVKCmdBeginRendering
+
+template <size_t N>
+VkResult MVKCmdBeginRendering<N>::setContent(MVKCommandBuffer* cmdBuff,
+											 const VkRenderingInfo* pRenderingInfo) {
+	_renderingInfo = *pRenderingInfo;
+
+	// Copy attachments content, redirect info pointers to copied content, and remove any stale pNext refs
+	_colorAttachments.assign(_renderingInfo.pColorAttachments,
+							 _renderingInfo.pColorAttachments + _renderingInfo.colorAttachmentCount);
+	_renderingInfo.pColorAttachments = _colorAttachments.data();
+	for (auto& caAtt : _colorAttachments) { caAtt.pNext = nullptr; }
+
+	if (mvkSetOrClear(&_depthAttachment, _renderingInfo.pDepthAttachment)) {
+		_renderingInfo.pDepthAttachment = &_depthAttachment;
+	}
+	if (mvkSetOrClear(&_stencilAttachment, _renderingInfo.pStencilAttachment)) {
+		_renderingInfo.pStencilAttachment = &_stencilAttachment;
+	}
+
+	return VK_SUCCESS;
+}
+
+template <size_t N>
+void MVKCmdBeginRendering<N>::encode(MVKCommandEncoder* cmdEncoder) {
+	cmdEncoder->beginRendering(this, &_renderingInfo);
+}
+
+template class MVKCmdBeginRendering<1>;
+template class MVKCmdBeginRendering<2>;
+template class MVKCmdBeginRendering<4>;
+template class MVKCmdBeginRendering<8>;
+
+
+#pragma mark -
+#pragma mark MVKCmdEndRendering
+
+VkResult MVKCmdEndRendering::setContent(MVKCommandBuffer* cmdBuff) {
+	return VK_SUCCESS;
+}
+
+void MVKCmdEndRendering::encode(MVKCommandEncoder* cmdEncoder) {
+	cmdEncoder->endRendering();
+}
+
+
+#pragma mark -
+#pragma mark MVKCmdSetSampleLocations
+
+VkResult MVKCmdSetSampleLocations::setContent(MVKCommandBuffer* cmdBuff,
+											  const VkSampleLocationsInfoEXT* pSampleLocationsInfo) {
+
+	for (uint32_t slIdx = 0; slIdx < pSampleLocationsInfo->sampleLocationsCount; slIdx++) {
+		auto& sl = pSampleLocationsInfo->pSampleLocations[slIdx];
+		_samplePositions.push_back(MTLSamplePositionMake(sl.x, sl.y));
+	}
+
+	return VK_SUCCESS;
+}
+
+void MVKCmdSetSampleLocations::encode(MVKCommandEncoder* cmdEncoder) {
+	cmdEncoder->setDynamicSamplePositions(_samplePositions.contents());
+}
+
+
+#pragma mark -
 #pragma mark MVKCmdExecuteCommands
 
 template <size_t N>
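The `MVKCmdBeginRenderPass::setContent()` hunk above builds one array of sample positions per subpass, leaving the array empty for subpasses not named in `VkRenderPassSampleLocationsBeginInfoEXT`. The grouping logic can be isolated as below; the structs are simplified stand-ins for `VkSampleLocationEXT`, `MTLSamplePosition`, and `VkSubpassSampleLocationsEXT`, not the real types.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct SampleLocation { float x, y; };   // stand-in for VkSampleLocationEXT
struct SamplePosition { float x, y; };   // stand-in for MTLSamplePosition
struct SubpassSampleLocations {          // stand-in for VkSubpassSampleLocationsEXT
    uint32_t subpassIndex;
    std::vector<SampleLocation> locations;
};

// Build one positions array per subpass index, mirroring the per-subpass
// grouping in MVKCmdBeginRenderPass::setContent(). Subpasses with no
// entries in the input remain as empty arrays.
std::vector<std::vector<SamplePosition>>
groupSamplePositions(uint32_t subpassCount,
                     const std::vector<SubpassSampleLocations>& spLocs) {
    std::vector<std::vector<SamplePosition>> perSubpass(subpassCount);
    for (const auto& spsl : spLocs) {
        auto& out = perSubpass[spsl.subpassIndex];
        for (const auto& sl : spsl.locations) { out.push_back({sl.x, sl.y}); }
    }
    return perSubpass;
}
```

Pre-sizing the outer array to the subpass count is what allows the encoder to index it directly by subpass without checking which subpasses were supplied.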
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.h b/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.h
index 7a38e67..5502dc2 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.h
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.h
@@ -52,6 +52,16 @@
 typedef struct MVKCommandEncodingContext {
 	NSUInteger mtlVisibilityResultOffset = 0;
 	const MVKMTLBufferAllocation* visibilityResultBuffer = nullptr;
+
+	MVKRenderPass* getRenderPass() { return _renderPass; }
+	MVKFramebuffer* getFramebuffer() { return _framebuffer; }
+	void setRenderingContext(MVKRenderPass* renderPass, MVKFramebuffer* framebuffer);
+	VkRenderingFlags getRenderingFlags() { return _renderPass ? _renderPass->getRenderingFlags() : 0; }
+	~MVKCommandEncodingContext();
+
+private:
+	MVKRenderPass* _renderPass = nullptr;
+	MVKFramebuffer* _framebuffer = nullptr;
 } MVKCommandEncodingContext;
 
 
@@ -108,6 +118,10 @@
 	/** Called when a MVKCmdExecuteCommands is added to this command buffer. */
 	void recordExecuteCommands(const MVKArrayRef<MVKCommandBuffer*> secondaryCommandBuffers);
 
+	/** Called when a timestamp command is added. */
+	void recordTimestampCommand();
+
+
 #pragma mark Tessellation constituent command management
 
 	/** Update the last recorded pipeline with tessellation shaders */
@@ -166,95 +180,34 @@
 	bool canPrefill();
 	void prefill();
 	void clearPrefilledMTLCommandBuffer();
-	void releaseCommands();
+	void releaseCommands(MVKCommand* command);
+	void releaseRecordedCommands();
+	void flushImmediateCmdEncoder();
 
 	MVKCommand* _head = nullptr;
 	MVKCommand* _tail = nullptr;
-	uint32_t _commandCount;
+	MVKSmallVector<VkFormat, kMVKDefaultAttachmentCount> _colorAttachmentFormats;
 	MVKCommandPool* _commandPool;
-	std::atomic_flag _isExecutingNonConcurrently;
 	VkCommandBufferInheritanceInfo _secondaryInheritanceInfo;
+	VkCommandBufferInheritanceRenderingInfo _inerhitanceRenderingInfo;
 	id<MTLCommandBuffer> _prefilledMTLCmdBuffer = nil;
+	MVKCommandEncodingContext* _immediateCmdEncodingContext = nullptr;
+	MVKCommandEncoder* _immediateCmdEncoder = nullptr;
+	uint32_t _commandCount;
+	std::atomic_flag _isExecutingNonConcurrently;
 	bool _isSecondary;
 	bool _doesContinueRenderPass;
 	bool _canAcceptCommands;
 	bool _isReusable;
 	bool _supportsConcurrentExecution;
 	bool _wasExecuted;
+	bool _hasStageCounterTimestampCommand;
 };
 
 
 #pragma mark -
 #pragma mark MVKCommandEncoder
 
-// The following commands can be issued both inside and outside a renderpass and their state must
-// span multiple MTLRenderCommandEncoders, to allow state to be set before a renderpass, and to
-// allow more than one MTLRenderCommandEncoder to be used for a single Vulkan renderpass or subpass.
-//
-// + vkCmdBindPipeline() : _graphicsPipelineState & _computePipelineState
-// + vkCmdBindDescriptorSets() : _graphicsResourcesState & _computeResourcesState
-// + vkCmdBindVertexBuffers() : _graphicsResourcesState
-// + vkCmdBindIndexBuffer() : _graphicsResourcesState
-// + vkCmdPushConstants() : _vertexPushConstants & _tessCtlPushConstants & _tessEvalPushConstants & _fragmentPushConstants & _computePushConstants
-// + vkCmdSetViewport() : _viewportState
-// + vkCmdSetDepthBias() : _depthBiasState
-// + vkCmdSetScissor() : _scissorState
-// + vkCmdSetStencilCompareMask() : _depthStencilState
-// + vkCmdSetStencilWriteMask() : _depthStencilState
-// + vkCmdSetStencilReference() : _stencilReferenceValueState
-// + vkCmdSetBlendConstants() : _blendColorState
-// + vkCmdBeginQuery() : _occlusionQueryState
-// + vkCmdEndQuery() : _occlusionQueryState
-// + vkCmdPipelineBarrier() : handled via textureBarrier and MTLBlitCommandEncoder
-// + vkCmdWriteTimestamp() : doesn't affect MTLCommandEncoders
-// + vkCmdExecuteCommands() : state managed by embedded commands
-// - vkCmdSetLineWidth() - unsupported by Metal
-// - vkCmdSetDepthBounds() - unsupported by Metal
-// - vkCmdWaitEvents() - unsupported by Metal
-
-// The above list of Vulkan commands covers the following corresponding MTLRenderCommandEncoder state:
-// + setBlendColorRed : _blendColorState
-// + setCullMode : _graphicsPipelineState
-// + setDepthBias : _depthBiasState
-// + setDepthClipMode : _graphicsPipelineState
-// + setDepthStencilState : _depthStencilState
-// + setFrontFacingWinding : _graphicsPipelineState
-// + setRenderPipelineState : _graphicsPipelineState
-// + setScissorRect : _scissorState
-// + setStencilFrontReferenceValue : _stencilReferenceValueState
-// + setStencilReferenceValue (unused) : _stencilReferenceValueState
-// + setTriangleFillMode : _graphicsPipelineState
-// + setViewport : _viewportState
-// + setVisibilityResultMode : _occlusionQueryState
-// + setVertexBuffer : _graphicsResourcesState & _vertexPushConstants & _tessEvalPushConstants
-// + setVertexBuffers (unused) : _graphicsResourcesState
-// + setVertexBytes : _vertexPushConstants & _tessEvalPushConstants
-// + setVertexBufferOffset (unused) : _graphicsResourcesState
-// + setVertexTexture : _graphicsResourcesState
-// + setVertexTextures (unused) : _graphicsResourcesState
-// + setVertexSamplerState : _graphicsResourcesState
-// + setVertexSamplerStates : (unused) : _graphicsResourcesState
-// + setFragmentBuffer : _graphicsResourcesState & _fragmentPushConstants
-// + setFragmentBuffers (unused) : _graphicsResourcesState
-// + setFragmentBytes : _fragmentPushConstants
-// + setFragmentBufferOffset (unused) : _graphicsResourcesState
-// + setFragmentTexture : _graphicsResourcesState
-// + setFragmentTextures (unused) : _graphicsResourcesState
-// + setFragmentSamplerState : _graphicsResourcesState
-// + setFragmentSamplerStates : (unused) : _graphicsResourcesState
-
-// The above list of Vulkan commands covers the following corresponding MTLComputeCommandEncoder state:
-// + setComputePipelineState : _computePipelineState & _graphicsPipelineState
-// + setBuffer : _computeResourcesState & _computePushConstants & _graphicsResourcesState & _tessCtlPushConstants
-// + setBuffers (unused) : _computeResourcesState & _graphicsResourcesState
-// + setBytes : _computePushConstants & _tessCtlPushConstants
-// + setBufferOffset (unused) : _computeResourcesState & _graphicsResourcesState
-// + setTexture : _computeResourcesState & _graphicsResourcesState
-// + setTextures (unused) : _computeResourcesState & _graphicsResourcesState
-// + setSamplerState : _computeResourcesState & _graphicsResourcesState
-// + setSamplerStates : (unused) : _computeResourcesState & _graphicsResourcesState
-
-
 /*** Holds a collection of active queries for each query pool. */
 typedef std::unordered_map<MVKQueryPool*, MVKSmallVector<uint32_t, kMVKDefaultQueryCount>> MVKActivatedQueries;
 
@@ -274,6 +227,10 @@
 
 	/** Encode commands from the command buffer onto the Metal command buffer. */
 	void encode(id<MTLCommandBuffer> mtlCmdBuff, MVKCommandEncodingContext* pEncodingContext);
+
+	/** Begins encoding commands onto the Metal command buffer. */
+	void beginEncoding(id<MTLCommandBuffer> mtlCmdBuff, MVKCommandEncodingContext* pEncodingContext);
+
+	/** Encodes the specified command, and all linked subsequent commands, onto the Metal command buffer. */
+	void encodeCommands(MVKCommand* command);
+
+	/** Ends all encoding, and finishes any pending queries. */
+	void endEncoding();
 
 	/** Encode commands from the specified secondary command buffer onto the Metal command buffer. */
 	void encodeSecondary(MVKCommandBuffer* secondaryCmdBuffer);
@@ -283,9 +240,10 @@
 						 VkSubpassContents subpassContents,
 						 MVKRenderPass* renderPass,
 						 MVKFramebuffer* framebuffer,
-						 VkRect2D& renderArea,
+						 const VkRect2D& renderArea,
 						 MVKArrayRef<VkClearValue> clearValues,
-						 MVKArrayRef<MVKImageView*> attachments);
+						 MVKArrayRef<MVKImageView*> attachments,
+						 MVKArrayRef<MVKArrayRef<MTLSamplePosition>> subpassSamplePositions);
 
 	/** Begins the next render subpass. */
 	void beginNextSubpass(MVKCommand* subpassCmd, VkSubpassContents renderpassContents);
@@ -293,6 +251,12 @@
 	/** Begins the next multiview Metal render pass. */
 	void beginNextMultiviewPass();
 
+	/** Sets the dynamic custom sample positions to use when rendering. */
+	void setDynamicSamplePositions(MVKArrayRef<MTLSamplePosition> dynamicSamplePositions);
+
+	/** Begins dynamic rendering. */
+	void beginRendering(MVKCommand* rendCmd, const VkRenderingInfo* pRenderingInfo);
+
 	/** Begins a Metal render pass for the current render subpass. */
 	void beginMetalRenderPass(MVKCommandUse cmdUse);
 
@@ -300,7 +264,7 @@
 	void encodeStoreActions(bool storeOverride = false);
 
 	/** Returns whether or not we are presently in a render pass. */
-	bool isInRenderPass() { return _renderPass != nullptr; }
+	bool isInRenderPass() { return _pEncodingContext->getRenderPass() != nullptr; }
 
 	/** Returns the render subpass that is currently active. */
 	MVKRenderSubpass* getSubpass();
@@ -349,6 +313,9 @@
 	/** Ends the current renderpass. */
 	void endRenderpass();
 
+	/** Ends the current dynamic rendering. */
+	void endRendering();
+
 	/** 
 	 * Ends all encoding operations on the current Metal command encoder.
 	 *
@@ -499,36 +466,36 @@
     NSString* getMTLRenderCommandEncoderName(MVKCommandUse cmdUse);
 	void encodeGPUCounterSample(MVKGPUCounterQueryPool* mvkQryPool, uint32_t sampleIndex, MVKCounterSamplingFlags samplingPoints);
 	void encodeTimestampStageCounterSamples();
-	bool hasTimestampStageCounterQueries() { return !_timestampStageCounterQueries.empty(); }
 	id<MTLFence> getStageCountersMTLFence();
+	MVKArrayRef<MTLSamplePosition> getCustomSamplePositions();
 
 	typedef struct GPUCounterQuery {
 		MVKGPUCounterQueryPool* queryPool = nullptr;
 		uint32_t query = 0;
 	} GPUCounterQuery;
 
-	VkSubpassContents _subpassContents;
-	MVKRenderPass* _renderPass;
-	MVKFramebuffer* _framebuffer;
-	MVKCommand* _lastMultiviewPassCmd;
-	uint32_t _renderSubpassIndex;
-	uint32_t _multiviewPassIndex;
 	VkRect2D _renderArea;
+	MVKCommand* _lastMultiviewPassCmd;
     MVKActivatedQueries* _pActivatedQueries;
 	MVKSmallVector<GPUCounterQuery, 16> _timestampStageCounterQueries;
 	MVKSmallVector<VkClearValue, kMVKDefaultAttachmentCount> _clearValues;
 	MVKSmallVector<MVKImageView*, kMVKDefaultAttachmentCount> _attachments;
+	MVKSmallVector<MTLSamplePosition> _dynamicSamplePositions;
+	MVKSmallVector<MVKSmallVector<MTLSamplePosition>> _subpassSamplePositions;
 	id<MTLComputeCommandEncoder> _mtlComputeEncoder;
-	MVKCommandUse _mtlComputeEncoderUse;
 	id<MTLBlitCommandEncoder> _mtlBlitEncoder;
 	id<MTLFence> _stageCountersMTLFence;
-    MVKCommandUse _mtlBlitEncoderUse;
 	MVKPushConstantsCommandEncoderState _vertexPushConstants;
 	MVKPushConstantsCommandEncoderState _tessCtlPushConstants;
 	MVKPushConstantsCommandEncoderState _tessEvalPushConstants;
 	MVKPushConstantsCommandEncoderState _fragmentPushConstants;
 	MVKPushConstantsCommandEncoderState _computePushConstants;
     MVKOcclusionQueryCommandEncoderState _occlusionQueryState;
+	VkSubpassContents _subpassContents;
+	MVKCommandUse _mtlComputeEncoderUse;
+	MVKCommandUse _mtlBlitEncoderUse;
+	uint32_t _renderSubpassIndex;
+	uint32_t _multiviewPassIndex;
     uint32_t _flushCount = 0;
 	bool _isRenderingEntireAttachment;
 };
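The header refactor above splits `MVKCommandEncoder::encode()` into `beginEncoding()`/`encodeCommands()`/`endEncoding()`, so a command buffer can either encode each command immediately as it is recorded (the prefilled, non-reusable path) or batch-encode the retained command list later, as before. A minimal, hypothetical C++ sketch of that control flow; `Cmd` and `Encoder` are illustrative stand-ins, not the real `MVKCommand`/`MVKCommandEncoder`, and no Metal types are involved:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for MVKCommand: a singly-linked list of recorded commands.
struct Cmd {
    std::string name;
    Cmd* _next = nullptr;
};

// Hypothetical stand-in for MVKCommandEncoder with the three-phase API.
struct Encoder {
    std::vector<std::string> encoded;   // stands in for Metal encoding side effects
    bool began = false;
    bool ended = false;

    void beginEncoding() { began = true; }

    // Walks the linked command list from the given command onward.
    void encodeCommands(Cmd* cmd) {
        while (cmd) {
            encoded.push_back(cmd->name);
            cmd = cmd->_next;
        }
    }

    void endEncoding() { ended = true; }

    // The original single-shot entry point, now composed from the three phases.
    void encode(Cmd* head) {
        beginEncoding();
        encodeCommands(head);
        endEncoding();
    }
};
```

In the immediate path, `beginEncoding()` is called once at `vkBeginCommandBuffer()` time, `encodeCommands()` is called per command as each is recorded, and `endEncoding()` runs at `vkEndCommandBuffer()` time; the deferred path still calls `encode()` with the list head.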
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.mm b/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.mm
index ac7124b..2c84d4a 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.mm
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandBuffer.mm
@@ -30,6 +30,32 @@
 
 using namespace std;
 
+#pragma mark -
+#pragma mark MVKCommandEncodingContext
+
+// Sets the rendering objects, releasing the old objects and retaining the new objects.
+// The new object is retained before the old one is released, in case they are the same object.
+// With dynamic rendering, the objects are transient, and live only for the duration of the
+// active renderpass. To make an object transient, the calling code releases it after it has
+// been retained here, so that when it is released again here at the end of the renderpass,
+// it will automatically be destroyed. App-created objects are not released by the calling
+// code, and will not be destroyed by the release here.
+void MVKCommandEncodingContext::setRenderingContext(MVKRenderPass* renderPass, MVKFramebuffer* framebuffer) {
+
+	if (renderPass) { renderPass->retain(); }
+	if (_renderPass) { _renderPass->release(); }
+	_renderPass = renderPass;
+
+	if (framebuffer) { framebuffer->retain(); }
+	if (_framebuffer) { _framebuffer->release(); }
+	_framebuffer = framebuffer;
+}
+
+// Release rendering objects in case this instance is destroyed before ending the current renderpass.
+MVKCommandEncodingContext::~MVKCommandEncodingContext() {
+	setRenderingContext(nullptr, nullptr);
+}
+
 
 #pragma mark -
 #pragma mark MVKCommandBuffer
@@ -47,27 +73,72 @@
 
 	// If this is a secondary command buffer, and contains inheritance info, set the inheritance info and determine
 	// whether it contains render pass continuation info. Otherwise, clear the inheritance info, and ignore it.
-	const VkCommandBufferInheritanceInfo* pInheritInfo = (_isSecondary ? pBeginInfo->pInheritanceInfo : NULL);
+	// Also check for and set any dynamic rendering inheritance info. The color format array must be copied locally.
+	const VkCommandBufferInheritanceInfo* pInheritInfo = (_isSecondary ? pBeginInfo->pInheritanceInfo : nullptr);
 	bool hasInheritInfo = mvkSetOrClear(&_secondaryInheritanceInfo, pInheritInfo);
 	_doesContinueRenderPass = mvkAreAllFlagsEnabled(usage, VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT) && hasInheritInfo;
+	if (hasInheritInfo) {
+		for (const auto* next = (VkBaseInStructure*)_secondaryInheritanceInfo.pNext; next; next = next->pNext) {
+			switch (next->sType) {
+				case VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_RENDERING_INFO: {
+					if (mvkSetOrClear(&_inerhitanceRenderingInfo, (VkCommandBufferInheritanceRenderingInfo*)next)) {
+						for (uint32_t caIdx = 0; caIdx < _inerhitanceRenderingInfo.colorAttachmentCount; caIdx++) {
+							_colorAttachmentFormats.push_back(_inerhitanceRenderingInfo.pColorAttachmentFormats[caIdx]);
+						}
+						_inerhitanceRenderingInfo.pColorAttachmentFormats = _colorAttachmentFormats.data();
+					}
+					break;
+				}
+				default:
+					break;
+			}
+		}
+	}
 
-	return getConfigurationResult();
+	if (canPrefill()) {
+		@autoreleasepool {
+			uint32_t qIdx = 0;
+			_prefilledMTLCmdBuffer = _commandPool->newMTLCommandBuffer(qIdx);	// retain
+
+			_immediateCmdEncodingContext = new MVKCommandEncodingContext;
+
+			_immediateCmdEncoder = new MVKCommandEncoder(this);
+			_immediateCmdEncoder->beginEncoding(_prefilledMTLCmdBuffer, _immediateCmdEncodingContext);
+		}
+	}
+
+	return getConfigurationResult();
 }
 
-void MVKCommandBuffer::releaseCommands() {
-	MVKCommand* cmd = _head;
-	while (cmd) {
-		MVKCommand* nextCmd = cmd->_next;	// Establish next before returning current to pool.
-		(cmd->getTypePool(getCommandPool()))->returnObject(cmd);
-		cmd = nextCmd;
-	}
+void MVKCommandBuffer::releaseCommands(MVKCommand* command) {
+	while (command) {
+		MVKCommand* nextCommand = command->_next;	// Establish next before returning current to pool.
+		(command->getTypePool(getCommandPool()))->returnObject(command);
+		command = nextCommand;
+	}
+}
+
+void MVKCommandBuffer::releaseRecordedCommands() {
+	releaseCommands(_head);
 	_head = nullptr;
 	_tail = nullptr;
 }
 
+void MVKCommandBuffer::flushImmediateCmdEncoder() {
+	if (_immediateCmdEncoder) {
+		_immediateCmdEncoder->endEncoding();
+		delete _immediateCmdEncoder;
+		_immediateCmdEncoder = nullptr;
+
+		delete _immediateCmdEncodingContext;
+		_immediateCmdEncodingContext = nullptr;
+	}
+}
+
 VkResult MVKCommandBuffer::reset(VkCommandBufferResetFlags flags) {
+	flushImmediateCmdEncoder();
 	clearPrefilledMTLCommandBuffer();
-	releaseCommands();
+	releaseRecordedCommands();
 	_doesContinueRenderPass = false;
 	_canAcceptCommands = false;
 	_isReusable = false;
@@ -76,6 +147,7 @@
 	_isExecutingNonConcurrently.clear();
 	_commandCount = 0;
 	_needsVisibilityResultMTLBuffer = false;
+	_hasStageCounterTimestampCommand = false;
 	_lastTessellationPipeline = nullptr;
 	_lastMultiviewSubpass = nullptr;
 	setConfigurationResult(VK_NOT_READY);
@@ -89,21 +161,33 @@
 
 VkResult MVKCommandBuffer::end() {
 	_canAcceptCommands = false;
-	prefill();
+	flushImmediateCmdEncoder();
 	return getConfigurationResult();
 }
 
 void MVKCommandBuffer::addCommand(MVKCommand* command) {
-	if ( !_canAcceptCommands ) {
-		setConfigurationResult(reportError(VK_NOT_READY, "Command buffer cannot accept commands before vkBeginCommandBuffer() is called."));
-		return;
-	}
+	if ( !_canAcceptCommands ) {
+		setConfigurationResult(reportError(VK_NOT_READY, "Command buffer cannot accept commands before vkBeginCommandBuffer() is called."));
+		return;
+	}
+
+	_commandCount++;
+
+	if (_immediateCmdEncoder) {
+		_immediateCmdEncoder->encodeCommands(command);
+
+		if ( !_isReusable ) {
+			releaseCommands(command);
+			return;
+		}
+	}
 
     if (_tail) { _tail->_next = command; }
     command->_next = nullptr;
     _tail = command;
     if ( !_head ) { _head = command; }
-    _commandCount++;
 }
 
 void MVKCommandBuffer::submit(MVKQueueCommandBufferSubmission* cmdBuffSubmit,
@@ -141,27 +225,6 @@
 	return true;
 }
 
-// If we can, prefill a MTLCommandBuffer with the commands in this command buffer.
-// Wrap in autorelease pool to capture autoreleased Metal encoding activity.
-void MVKCommandBuffer::prefill() {
-	@autoreleasepool {
-		clearPrefilledMTLCommandBuffer();
-
-		if ( !canPrefill() ) { return; }
-
-		uint32_t qIdx = 0;
-		_prefilledMTLCmdBuffer = _commandPool->newMTLCommandBuffer(qIdx);	// retain
-
-		MVKCommandEncodingContext encodingContext;
-		MVKCommandEncoder encoder(this);
-		encoder.encode(_prefilledMTLCmdBuffer, &encodingContext);
-
-		// Once encoded onto Metal, if this command buffer is not reusable, we don't need the
-		// MVKCommand instances anymore, so release them in order to reduce memory pressure.
-		if ( !_isReusable ) { releaseCommands(); }
-	}
-}
-
 bool MVKCommandBuffer::canPrefill() {
 	bool wantPrefill = _device->shouldPrefillMTLCommandBuffers();
 	return wantPrefill && !(_isSecondary || _supportsConcurrentExecution);
@@ -197,20 +260,21 @@
 	reset(0);
 }
 
-// If the initial visibility result buffer has not been set, promote the first visibility result buffer
-// found among any of the secondary command buffers, to support the case where a render pass is started in
-// the primary command buffer but the visibility query is started inside one of the secondary command buffers.
+// Promote the initial visibility buffer and indication of timestamp use from the secondary buffers.
 void MVKCommandBuffer::recordExecuteCommands(const MVKArrayRef<MVKCommandBuffer*> secondaryCommandBuffers) {
-	if (!_needsVisibilityResultMTLBuffer) {
-		for (MVKCommandBuffer* cmdBuff : secondaryCommandBuffers) {
-			if (cmdBuff->_needsVisibilityResultMTLBuffer) {
-				_needsVisibilityResultMTLBuffer = true;
-				break;
-			}
-		}
+	for (MVKCommandBuffer* cmdBuff : secondaryCommandBuffers) {
+		if (cmdBuff->_needsVisibilityResultMTLBuffer) { _needsVisibilityResultMTLBuffer = true; }
+		if (cmdBuff->_hasStageCounterTimestampCommand) { _hasStageCounterTimestampCommand = true; }
 	}
 }
 
+// Track whether a stage-based timestamp command has been added, so we know
+// to update the timestamp command fence when ending a Metal command encoder.
+void MVKCommandBuffer::recordTimestampCommand() {
+	_hasStageCounterTimestampCommand = mvkIsAnyFlagEnabled(_device->_pMetalFeatures->counterSamplingPoints, MVK_COUNTER_SAMPLING_AT_PIPELINE_STAGE);
+}
+
+
 #pragma mark -
 #pragma mark Tessellation constituent command management
 
@@ -251,33 +315,43 @@
 
 void MVKCommandEncoder::encode(id<MTLCommandBuffer> mtlCmdBuff,
 							   MVKCommandEncodingContext* pEncodingContext) {
-	_framebuffer = nullptr;
-	_renderPass = nullptr;
-	_subpassContents = VK_SUBPASS_CONTENTS_INLINE;
-	_renderSubpassIndex = 0;
-	_multiviewPassIndex = 0;
-	_canUseLayeredRendering = false;
+	beginEncoding(mtlCmdBuff, pEncodingContext);
+	encodeCommands(_cmdBuffer->_head);
+	endEncoding();
+}
 
+void MVKCommandEncoder::beginEncoding(id<MTLCommandBuffer> mtlCmdBuff, MVKCommandEncodingContext* pEncodingContext) {
 	_pEncodingContext = pEncodingContext;
-	_mtlCmdBuffer = mtlCmdBuff;		// not retained
 
-	setLabelIfNotNil(_mtlCmdBuffer, _cmdBuffer->_debugName);
+	_subpassContents = VK_SUBPASS_CONTENTS_INLINE;
+	_renderSubpassIndex = 0;
+	_multiviewPassIndex = 0;
+	_canUseLayeredRendering = false;
 
-	MVKCommand* cmd = _cmdBuffer->_head;
-	while (cmd) {
-		uint32_t prevMVPassIdx = _multiviewPassIndex;
-		cmd->encode(this);
-		if (_multiviewPassIndex > prevMVPassIdx) {
-			// This means we're in a multiview render pass, and we moved on to the
-			// next view group. Re-encode all commands in the subpass again for this group.
-			cmd = _lastMultiviewPassCmd->_next;
-		} else {
-			cmd = cmd->_next;
-		}
-	}
+	_mtlCmdBuffer = mtlCmdBuff;		// not retained
 
-	endCurrentMetalEncoding();
-	finishQueries();
+	setLabelIfNotNil(_mtlCmdBuffer, _cmdBuffer->_debugName);
+}
+
+void MVKCommandEncoder::encodeCommands(MVKCommand* command) {
+	while (command) {
+		uint32_t prevMVPassIdx = _multiviewPassIndex;
+		command->encode(this);
+
+		if (_multiviewPassIndex > prevMVPassIdx) {
+			// This means we're in a multiview render pass, and we moved on to the
+			// next view group. Re-encode all commands in the subpass again for this group.
+			command = _lastMultiviewPassCmd->_next;
+		} else {
+			command = command->_next;
+		}
+	}
+}
+
+void MVKCommandEncoder::endEncoding() {
+	endCurrentMetalEncoding();
+	finishQueries();
 }
 
 void MVKCommandEncoder::encodeSecondary(MVKCommandBuffer* secondaryCmdBuffer) {
@@ -288,20 +362,71 @@
 	}
 }
 
+void MVKCommandEncoder::beginRendering(MVKCommand* rendCmd, const VkRenderingInfo* pRenderingInfo) {
+
+	VkSubpassContents contents = (mvkIsAnyFlagEnabled(pRenderingInfo->flags, VK_RENDERING_CONTENTS_SECONDARY_COMMAND_BUFFERS_BIT)
+								  ? VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS
+								  : VK_SUBPASS_CONTENTS_INLINE);
+
+	uint32_t maxAttCnt = (pRenderingInfo->colorAttachmentCount + 1) * 2;
+	MVKImageView* attachments[maxAttCnt];
+	VkClearValue clearValues[maxAttCnt];
+	uint32_t attCnt = mvkGetAttachments(pRenderingInfo, attachments, clearValues);
+
+	// If we're resuming a suspended renderpass, continue to use the existing renderpass
+	// (with updated rendering flags) and framebuffer. Otherwise, create new transient
+	// renderpass and framebuffer objects from the pRenderingInfo, and retain them until
+	// the renderpass is completely finished, which may span multiple command encoders.
+	MVKRenderPass* mvkRP;
+	MVKFramebuffer* mvkFB;
+	bool isResumingSuspended = (mvkIsAnyFlagEnabled(_pEncodingContext->getRenderingFlags(), VK_RENDERING_SUSPENDING_BIT) &&
+								mvkIsAnyFlagEnabled(pRenderingInfo->flags, VK_RENDERING_RESUMING_BIT));
+	if (isResumingSuspended) {
+		mvkRP = _pEncodingContext->getRenderPass();
+		mvkRP->setRenderingFlags(pRenderingInfo->flags);
+		mvkFB = _pEncodingContext->getFramebuffer();
+	} else {
+		mvkRP = mvkCreateRenderPass(getDevice(), pRenderingInfo);
+		mvkFB = mvkCreateFramebuffer(getDevice(), pRenderingInfo, mvkRP);
+	}
+	beginRenderpass(rendCmd, contents, mvkRP, mvkFB,
+					pRenderingInfo->renderArea,
+					MVKArrayRef(clearValues, attCnt),
+					MVKArrayRef(attachments, attCnt),
+					MVKArrayRef<MVKArrayRef<MTLSamplePosition>>());
+
+	// If we've just created new transient objects, once retained by this encoder,
+	// mark the objects as transient by releasing them from their initial creation
+	// retain, so they will be destroyed when released at the end of the renderpass,
+	// which may span multiple command encoders.
+	if ( !isResumingSuspended ) {
+		mvkRP->release();
+		mvkFB->release();
+	}
+}
+
 void MVKCommandEncoder::beginRenderpass(MVKCommand* passCmd,
 										VkSubpassContents subpassContents,
 										MVKRenderPass* renderPass,
 										MVKFramebuffer* framebuffer,
-										VkRect2D& renderArea,
+										const VkRect2D& renderArea,
 										MVKArrayRef<VkClearValue> clearValues,
-										MVKArrayRef<MVKImageView*> attachments) {
-	_renderPass = renderPass;
-	_framebuffer = framebuffer;
+										MVKArrayRef<MVKImageView*> attachments,
+										MVKArrayRef<MVKArrayRef<MTLSamplePosition>> subpassSamplePositions) {
+	_pEncodingContext->setRenderingContext(renderPass, framebuffer);
 	_renderArea = renderArea;
 	_isRenderingEntireAttachment = (mvkVkOffset2DsAreEqual(_renderArea.offset, {0,0}) &&
 									mvkVkExtent2DsAreEqual(_renderArea.extent, getFramebufferExtent()));
 	_clearValues.assign(clearValues.begin(), clearValues.end());
 	_attachments.assign(attachments.begin(), attachments.end());
+
+	// Copy the sample positions array of arrays, one array of sample positions for each subpass index.
+	_subpassSamplePositions.resize(subpassSamplePositions.size);
+	for (uint32_t spSPIdx = 0; spSPIdx < subpassSamplePositions.size; spSPIdx++) {
+		_subpassSamplePositions[spSPIdx].assign(subpassSamplePositions[spSPIdx].begin(),
+												subpassSamplePositions[spSPIdx].end());
+	}
+
 	setSubpass(passCmd, subpassContents, 0);
 }
 
@@ -337,6 +462,10 @@
 
 uint32_t MVKCommandEncoder::getMultiviewPassIndex() { return _multiviewPassIndex; }
 
+void MVKCommandEncoder::setDynamicSamplePositions(MVKArrayRef<MTLSamplePosition> dynamicSamplePositions) {
+	_dynamicSamplePositions.assign(dynamicSamplePositions.begin(), dynamicSamplePositions.end());
+}
+
 // Creates _mtlRenderEncoder and marks cached render state as dirty so it will be set into the _mtlRenderEncoder.
 void MVKCommandEncoder::beginMetalRenderPass(MVKCommandUse cmdUse) {
 
@@ -346,7 +475,7 @@
     MTLRenderPassDescriptor* mtlRPDesc = [MTLRenderPassDescriptor renderPassDescriptor];
 	getSubpass()->populateMTLRenderPassDescriptor(mtlRPDesc,
 												  _multiviewPassIndex,
-												  _framebuffer,
+												  _pEncodingContext->getFramebuffer(),
 												  _attachments.contents(),
 												  _clearValues.contents(),
 												  _isRenderingEntireAttachment,
@@ -388,6 +517,14 @@
         }
     }
 
+	// If programmable sample positions are supported, set them into the render pass descriptor.
+	// If no custom sample positions are established, the size will be zero,
+	// and Metal will fall back to its default sample positions.
+	if (_pDeviceMetalFeatures->programmableSamplePositions) {
+		auto cstmSampPosns = getCustomSamplePositions();
+		[mtlRPDesc setSamplePositions: cstmSampPosns.data count: cstmSampPosns.size];
+	}
+
     _mtlRenderEncoder = [_mtlCmdBuffer renderCommandEncoderWithDescriptor: mtlRPDesc];     // not retained
 	setLabelIfNotNil(_mtlRenderEncoder, getMTLRenderCommandEncoderName(cmdUse));
 
@@ -411,6 +548,18 @@
     _occlusionQueryState.beginMetalRenderPass();
 }
 
+// If custom sample positions have been set, return them, otherwise return an empty array.
+// For Metal, VkPhysicalDeviceSampleLocationsPropertiesEXT::variableSampleLocations is false.
+// As such, Vulkan requires that sample positions be established at the beginning of
+// a renderpass, and that both pipeline and dynamic sample locations be the same as those
+// set for each subpass. Therefore, the only sample positions of use are those set for each
+// subpass when the renderpass begins, and the pipeline and dynamic sample positions are ignored.
+MVKArrayRef<MTLSamplePosition> MVKCommandEncoder::getCustomSamplePositions() {
+	return (_renderSubpassIndex < _subpassSamplePositions.size()
+			? _subpassSamplePositions[_renderSubpassIndex].contents()
+			: MVKArrayRef<MTLSamplePosition>());
+}
+
 void MVKCommandEncoder::encodeStoreActions(bool storeOverride) {
 	getSubpass()->encodeStoreActions(this,
 									 _isRenderingEntireAttachment,
@@ -418,13 +567,13 @@
 									 storeOverride);
 }
 
-MVKRenderSubpass* MVKCommandEncoder::getSubpass() { return _renderPass->getSubpass(_renderSubpassIndex); }
+MVKRenderSubpass* MVKCommandEncoder::getSubpass() { return _pEncodingContext->getRenderPass()->getSubpass(_renderSubpassIndex); }
 
 // Returns a name for use as a MTLRenderCommandEncoder label
 NSString* MVKCommandEncoder::getMTLRenderCommandEncoderName(MVKCommandUse cmdUse) {
 	NSString* rpName;
 
-	rpName = _renderPass->getDebugName();
+	rpName = _pEncodingContext->getRenderPass()->getDebugName();
 	if (rpName) { return rpName; }
 
 	rpName = _cmdBuffer->getDebugName();
@@ -433,9 +582,15 @@
 	return mvkMTLRenderCommandEncoderLabel(cmdUse);
 }
 
-VkExtent2D MVKCommandEncoder::getFramebufferExtent() { return _framebuffer ? _framebuffer->getExtent2D() : VkExtent2D{0,0}; }
+VkExtent2D MVKCommandEncoder::getFramebufferExtent() {
+	auto* mvkFB = _pEncodingContext->getFramebuffer();
+	return mvkFB ? mvkFB->getExtent2D() : VkExtent2D{0,0};
+}
 
-uint32_t MVKCommandEncoder::getFramebufferLayerCount() { return _framebuffer ? _framebuffer->getLayerCount() : 0; }
+uint32_t MVKCommandEncoder::getFramebufferLayerCount() {
+	auto* mvkFB = _pEncodingContext->getFramebuffer();
+	return mvkFB ? mvkFB->getLayerCount() : 0;
+}
 
 void MVKCommandEncoder::bindPipeline(VkPipelineBindPoint pipelineBindPoint, MVKPipeline* pipeline) {
     switch (pipelineBindPoint) {
@@ -575,12 +730,16 @@
     _computePushConstants.encode();
 }
 
+void MVKCommandEncoder::endRendering() {
+	endRenderpass();
+}
+
 void MVKCommandEncoder::endRenderpass() {
 	encodeStoreActions();
 	endMetalRenderEncoding();
-
-	_renderPass = nullptr;
-	_framebuffer = nullptr;
+	if ( !mvkIsAnyFlagEnabled(_pEncodingContext->getRenderingFlags(), VK_RENDERING_SUSPENDING_BIT) ) {
+		_pEncodingContext->setRenderingContext(nullptr, nullptr);
+	}
 	_attachments.clear();
 	_renderSubpassIndex = 0;
 }
@@ -588,7 +747,7 @@
 void MVKCommandEncoder::endMetalRenderEncoding() {
     if (_mtlRenderEncoder == nil) { return; }
 
-	if (hasTimestampStageCounterQueries() ) { [_mtlRenderEncoder updateFence: getStageCountersMTLFence() afterStages: MTLRenderStageFragment]; }
+	if (_cmdBuffer->_hasStageCounterTimestampCommand) { [_mtlRenderEncoder updateFence: getStageCountersMTLFence() afterStages: MTLRenderStageFragment]; }
     [_mtlRenderEncoder endEncoding];
 	_mtlRenderEncoder = nil;    // not retained
 
@@ -616,12 +775,12 @@
 	_computeResourcesState.markDirty();
 	_computePushConstants.markDirty();
 
-	if (_mtlComputeEncoder && hasTimestampStageCounterQueries() ) { [_mtlComputeEncoder updateFence: getStageCountersMTLFence()]; }
+	if (_mtlComputeEncoder && _cmdBuffer->_hasStageCounterTimestampCommand) { [_mtlComputeEncoder updateFence: getStageCountersMTLFence()]; }
 	[_mtlComputeEncoder endEncoding];
 	_mtlComputeEncoder = nil;       // not retained
 	_mtlComputeEncoderUse = kMVKCommandUseNone;
 
-	if (_mtlBlitEncoder && hasTimestampStageCounterQueries() ) { [_mtlBlitEncoder updateFence: getStageCountersMTLFence()]; }
+	if (_mtlBlitEncoder && _cmdBuffer->_hasStageCounterTimestampCommand) { [_mtlBlitEncoder updateFence: getStageCountersMTLFence()]; }
 	[_mtlBlitEncoder endEncoding];
 	_mtlBlitEncoder = nil;          // not retained
     _mtlBlitEncoderUse = kMVKCommandUseNone;
@@ -753,7 +912,7 @@
 void MVKCommandEncoder::beginOcclusionQuery(MVKOcclusionQueryPool* pQueryPool, uint32_t query, VkQueryControlFlags flags) {
     _occlusionQueryState.beginOcclusionQuery(pQueryPool, query, flags);
     uint32_t queryCount = 1;
-    if (_renderPass && getSubpass()->isMultiview()) {
+    if (isInRenderPass() && getSubpass()->isMultiview()) {
         queryCount = getSubpass()->getViewCountInMetalPass(_multiviewPassIndex);
     }
     addActivatedQueries(pQueryPool, query, queryCount);
@@ -765,7 +924,7 @@
 
 void MVKCommandEncoder::markTimestamp(MVKTimestampQueryPool* pQueryPool, uint32_t query) {
     uint32_t queryCount = 1;
-    if (_renderPass && getSubpass()->isMultiview()) {
+    if (isInRenderPass() && getSubpass()->isMultiview()) {
         queryCount = getSubpass()->getViewCountInMetalPass(_multiviewPassIndex);
     }
 	addActivatedQueries(pQueryPool, query, queryCount);
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.h b/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.h
index 38a044d..c6e7280 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.h
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.h
@@ -198,7 +198,7 @@
     void setPushConstants(uint32_t offset, MVKArrayRef<char> pushConstants);
 
     /** Sets the index of the Metal buffer used to hold the push constants. */
-    void setMTLBufferIndex(uint32_t mtlBufferIndex);
+    void setMTLBufferIndex(uint32_t mtlBufferIndex, bool pipelineStageUsesPushConstants);
 
     /** Constructs this instance for the specified command encoder. */
     MVKPushConstantsCommandEncoderState(MVKCommandEncoder* cmdEncoder,
@@ -212,6 +212,7 @@
     MVKSmallVector<char, 128> _pushConstants;
     VkShaderStageFlagBits _shaderStage;
     uint32_t _mtlBufferIndex = 0;
+	bool _pipelineStageUsesPushConstants = false;
 };
 
 
@@ -406,6 +407,7 @@
 
     // Template function that executes a lambda expression on each dirty element of
     // a vector of bindings, and marks the bindings and the vector as no longer dirty.
+	// The isDirty flag is cleared before the operation runs, so the operation can set it again to leave the binding dirty.
 	template<class T, class V>
 	void encodeBinding(V& bindings,
 					   bool& bindingsDirtyFlag,
@@ -414,8 +416,9 @@
 			bindingsDirtyFlag = false;
 			for (auto& b : bindings) {
 				if (b.isDirty) {
-					mtlOperation(_cmdEncoder, b);
 					b.isDirty = false;
+					mtlOperation(_cmdEncoder, b);
+					if (b.isDirty) { bindingsDirtyFlag = true; }
 				}
 			}
 		}
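The `encodeBinding()` change above clears a binding's `isDirty` flag *before* invoking the operation, so the operation can re-mark the binding dirty if it decides not to encode it, which in turn re-marks the whole vector dirty for a later pass. A simplified, self-contained sketch of that pattern; `Binding` and the lambda are illustrative stand-ins, not MoltenVK types:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Simplified stand-in for a MoltenVK buffer binding: an index plus a dirty flag.
struct Binding {
    int index = 0;
    bool isDirty = true;
};

// Models the template in MVKCommandEncoderState.h: clear each binding's isDirty
// flag before running the operation; if the operation sets it again, propagate
// that back to the vector-level dirty flag so the binding is retried later.
void encodeBinding(std::vector<Binding>& bindings,
                   bool& bindingsDirtyFlag,
                   const std::function<void(Binding&)>& mtlOperation) {
    if (bindingsDirtyFlag) {
        bindingsDirtyFlag = false;
        for (auto& b : bindings) {
            if (b.isDirty) {
                b.isDirty = false;
                mtlOperation(b);
                if (b.isDirty) { bindingsDirtyFlag = true; }
            }
        }
    }
}
```

If the operation cannot encode a binding yet (for example, a required resource is not available at this stage), leaving the binding dirty keeps both the binding and the vector flagged, so the next encode pass will attempt it again.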
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.mm b/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.mm
index 3cfd9c7..dfb8c1c 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.mm
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandEncoderState.mm
@@ -166,18 +166,19 @@
     if (pcBuffSize > 0) { markDirty(); }
 }
 
-void MVKPushConstantsCommandEncoderState::setMTLBufferIndex(uint32_t mtlBufferIndex) {
-    if (mtlBufferIndex != _mtlBufferIndex) {
-        _mtlBufferIndex = mtlBufferIndex;
-        markDirty();
-    }
+void MVKPushConstantsCommandEncoderState::setMTLBufferIndex(uint32_t mtlBufferIndex, bool pipelineStageUsesPushConstants) {
+	if ((mtlBufferIndex != _mtlBufferIndex) || (pipelineStageUsesPushConstants != _pipelineStageUsesPushConstants)) {
+		_mtlBufferIndex = mtlBufferIndex;
+		_pipelineStageUsesPushConstants = pipelineStageUsesPushConstants;
+		markDirty();
+	}
 }
 
 // At this point, I have been marked not-dirty, under the assumption that I will make changes to the encoder.
 // However, some of the paths below decide not to actually make any changes to the encoder. In that case,
 // I should remain dirty until I actually do make encoder changes.
 void MVKPushConstantsCommandEncoderState::encodeImpl(uint32_t stage) {
-    if (_pushConstants.empty() ) { return; }
+    if ( !_pipelineStageUsesPushConstants || _pushConstants.empty() ) { return; }
 
 	_isDirty = true;	// Stay dirty until I actually decide to make a change to the encoder
 
@@ -739,7 +740,7 @@
 
 void MVKGraphicsResourcesCommandEncoderState::encodeImpl(uint32_t stage) {
 
-    MVKGraphicsPipeline* pipeline = (MVKGraphicsPipeline*)_cmdEncoder->_graphicsPipelineState.getPipeline();
+    MVKGraphicsPipeline* pipeline = (MVKGraphicsPipeline*)getPipeline();
     bool fullImageViewSwizzle = pipeline->fullImageViewSwizzle() || getDevice()->_pMetalFeatures->nativeTextureSwizzle;
     bool forTessellation = pipeline->isTessellationPipeline();
 
@@ -774,26 +775,33 @@
 	} else if (!forTessellation && stage == kMVKGraphicsStageRasterization) {
         encodeBindings(kMVKShaderStageVertex, "vertex", fullImageViewSwizzle,
                        [pipeline](MVKCommandEncoder* cmdEncoder, MVKMTLBufferBinding& b)->void {
-					       if (b.isInline) {
-                               cmdEncoder->setVertexBytes(cmdEncoder->_mtlRenderEncoder,
-                                                          b.mtlBytes,
-                                                          b.size,
-                                                          b.index);
-					       } else {
-                               [cmdEncoder->_mtlRenderEncoder setVertexBuffer: b.mtlBuffer
-                                                                       offset: b.offset
-                                                                      atIndex: b.index];
+                           // The app may have bound more vertex attribute buffers than used by the pipeline.
+                           // We must not bind those extra buffers to the shader because they might overwrite
+                           // any implicit buffers used by the pipeline.
+                           if (pipeline->isValidVertexBufferIndex(kMVKShaderStageVertex, b.index)) {
+                               if (b.isInline) {
+                                   cmdEncoder->setVertexBytes(cmdEncoder->_mtlRenderEncoder,
+                                                              b.mtlBytes,
+                                                              b.size,
+                                                              b.index);
+                               } else {
+                                   [cmdEncoder->_mtlRenderEncoder setVertexBuffer: b.mtlBuffer
+                                                                           offset: b.offset
+                                                                          atIndex: b.index];
 
-							   // Add any translated vertex bindings for this binding
-							   auto xltdVtxBindings = pipeline->getTranslatedVertexBindings();
-							   for (auto& xltdBind : xltdVtxBindings) {
-								   if (b.index == pipeline->getMetalBufferIndexForVertexAttributeBinding(xltdBind.binding)) {
-									   [cmdEncoder->_mtlRenderEncoder setVertexBuffer: b.mtlBuffer
-																			   offset: b.offset + xltdBind.translationOffset
-																			  atIndex: pipeline->getMetalBufferIndexForVertexAttributeBinding(xltdBind.translationBinding)];
-								   }
-							   }
-					       }
+                                   // Add any translated vertex bindings for this binding
+                                   auto xltdVtxBindings = pipeline->getTranslatedVertexBindings();
+                                   for (auto& xltdBind : xltdVtxBindings) {
+                                       if (b.index == pipeline->getMetalBufferIndexForVertexAttributeBinding(xltdBind.binding)) {
+                                           [cmdEncoder->_mtlRenderEncoder setVertexBuffer: b.mtlBuffer
+                                                                                   offset: b.offset + xltdBind.translationOffset
+                                                                                  atIndex: pipeline->getMetalBufferIndexForVertexAttributeBinding(xltdBind.translationBinding)];
+                                       }
+                                   }
+                               }
+                           } else {
+                               b.isDirty = true;	// We haven't written it out, so leave dirty until next time.
+                           }
                        },
                        [](MVKCommandEncoder* cmdEncoder, MVKMTLBufferBinding& b, const MVKArrayRef<uint32_t> s)->void {
                            cmdEncoder->setVertexBytes(cmdEncoder->_mtlRenderEncoder,
@@ -975,7 +983,7 @@
 
 	encodeMetalArgumentBuffer(kMVKShaderStageCompute);
 
-    MVKPipeline* pipeline = _cmdEncoder->_computePipelineState.getPipeline();
+    MVKPipeline* pipeline = getPipeline();
 	bool fullImageViewSwizzle = pipeline ? pipeline->fullImageViewSwizzle() : false;
 
     if (_resourceBindings.swizzleBufferBinding.isDirty) {
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.h b/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.h
index e73ebd1..a23dd0d 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.h
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.h
@@ -113,8 +113,9 @@
 
 /**
  * Key to use for looking up cached MTLRenderPipelineState instances.
- * Indicates which attachments are used, and holds the Metal pixel formats for each
- * color attachment plus one depth/stencil attachment. Also holds the Metal sample count.
+ * Indicates which attachments are enabled and used, and holds the Metal pixel formats for
+ * each color attachment plus one depth/stencil attachment. Also holds the Metal sample count.
+ * An attachment is considered used if it is enabled and has a valid Metal pixel format.
  *
  * This structure can be used as a key in a std::map and std::unordered_map.
  */
@@ -131,6 +132,8 @@
 
     bool isAttachmentEnabled(uint32_t attIdx) { return mvkIsAnyFlagEnabled(flags, bitFlag << attIdx); }
 
+	bool isAttachmentUsed(uint32_t attIdx) { return isAttachmentEnabled(attIdx) && attachmentMTLPixelFormats[attIdx]; }
+
 	bool isAnyAttachmentEnabled() { return mvkIsAnyFlagEnabled(flags, (bitFlag << kMVKClearAttachmentCount) - 1); }
 
 	void enableLayeredRendering() { mvkEnableFlags(flags, bitFlag << kMVKClearAttachmentLayeredRenderingBitIndex); }
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.mm b/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.mm
index e0f6fb0..fffa842 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.mm
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandResourceFactory.mm
@@ -330,7 +330,7 @@
 		[msl appendLineMVK];
 		[msl appendLineMVK: @"typedef struct {"];
 		for (uint32_t caIdx = 0; caIdx < kMVKClearAttachmentDepthStencilIndex; caIdx++) {
-			if (attKey.isAttachmentEnabled(caIdx)) {
+			if (attKey.isAttachmentUsed(caIdx)) {
 				NSString* typeStr = getMTLFormatTypeString((MTLPixelFormat)attKey.attachmentMTLPixelFormats[caIdx]);
 				[msl appendFormat: @"    %@4 color%u [[color(%u)]];", typeStr, caIdx, caIdx];
 				[msl appendLineMVK];
@@ -344,7 +344,7 @@
 		[msl appendLineMVK];
 		[msl appendLineMVK: @"    ClearColorsOut ccOut;"];
 		for (uint32_t caIdx = 0; caIdx < kMVKClearAttachmentDepthStencilIndex; caIdx++) {
-			if (attKey.isAttachmentEnabled(caIdx)) {
+			if (attKey.isAttachmentUsed(caIdx)) {
 				NSString* typeStr = getMTLFormatTypeString((MTLPixelFormat)attKey.attachmentMTLPixelFormats[caIdx]);
 				[msl appendFormat: @"    ccOut.color%u = %@4(ccIn.colors[%u]);", caIdx, typeStr, caIdx];
 				[msl appendLineMVK];
@@ -371,7 +371,7 @@
 		case kMVKFormatColorFloat:
 		case kMVKFormatDepthStencil:
 		case kMVKFormatCompressed:		return @"float";
-		default:						return @"unexpected_type";
+		case kMVKFormatNone:			return @"unexpected_MTLPixelFormatInvalid";
 	}
 }
 
diff --git a/MoltenVK/MoltenVK/Commands/MVKCommandTypePools.def b/MoltenVK/MoltenVK/Commands/MVKCommandTypePools.def
index d5785ff..e8cbae6 100644
--- a/MoltenVK/MoltenVK/Commands/MVKCommandTypePools.def
+++ b/MoltenVK/MoltenVK/Commands/MVKCommandTypePools.def
@@ -78,6 +78,9 @@
 MVK_CMD_TYPE_POOLS_FROM_5_THRESHOLDS(BeginRenderPass, 1, 2, 0, 1, 2)
 MVK_CMD_TYPE_POOL(NextSubpass)
 MVK_CMD_TYPE_POOL(EndRenderPass)
+MVK_CMD_TYPE_POOLS_FROM_3_THRESHOLDS(BeginRendering, 1, 2, 4)
+MVK_CMD_TYPE_POOL(EndRendering)
+MVK_CMD_TYPE_POOL(SetSampleLocations)
 MVK_CMD_TYPE_POOLS_FROM_THRESHOLD(ExecuteCommands, 1)
 MVK_CMD_TYPE_POOLS_FROM_2_THRESHOLDS(BindDescriptorSetsStatic, 1, 4)
 MVK_CMD_TYPE_POOLS_FROM_THRESHOLD(BindDescriptorSetsDynamic, 4)
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKDevice.h b/MoltenVK/MoltenVK/GPUObjects/MVKDevice.h
index d5aa83d..9989a69 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKDevice.h
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKDevice.h
@@ -128,6 +128,10 @@
 	/** Populates the specified structure with the format properties of this device. */
 	void getFormatProperties(VkFormat format, VkFormatProperties2* pFormatProperties);
 
+	/** Populates the specified structure with the multisample properties of this device. */
+	void getMultisampleProperties(VkSampleCountFlagBits samples,
+								  VkMultisamplePropertiesEXT* pMultisampleProperties);
+
 	/** Populates the image format properties supported on this device. */
     VkResult getImageFormatProperties(VkFormat format,
                                       VkImageType type,
@@ -425,6 +429,12 @@
 	id<MTLCommandBuffer> mtlCmdBuffer = nil;
 } MVKMTLBlitEncoder;
 
+typedef enum {
+	MVKSemaphoreStyleUseMTLEvent,
+	MVKSemaphoreStyleUseMTLFence,
+	MVKSemaphoreStyleUseEmulation
+} MVKSemaphoreStyle;
+
 /** Represents a Vulkan logical GPU device, associated with a physical device. */
 class MVKDevice : public MVKDispatchableVulkanAPIObject {
 
@@ -664,6 +674,15 @@
 	/** Invalidates the memory regions. */
 	VkResult invalidateMappedMemoryRanges(uint32_t memRangeCount, const VkMappedMemoryRange* pMemRanges);
 
+	/** Returns the number of Metal render passes needed to render all views. */
+	uint32_t getMultiviewMetalPassCount(uint32_t viewMask) const;
+
+	/** Returns the first view to be rendered in the given multiview pass. */
+	uint32_t getFirstViewIndexInMetalPass(uint32_t viewMask, uint32_t passIdx) const;
+
+	/** Returns the number of views to be rendered in the given multiview pass. */
+	uint32_t getViewCountInMetalPass(uint32_t viewMask, uint32_t passIdx) const;
+
 	/** Log all performance statistics. */
 	void logPerformanceSummary();
 
@@ -752,6 +771,9 @@
 
 #pragma mark Properties directly accessible
 
+	/** The list of Vulkan extensions, indicating whether each has been enabled by the app for this device. */
+	const MVKExtensionList _enabledExtensions;
+
 	/** Device features available and enabled. */
 	const VkPhysicalDeviceFeatures _enabledFeatures;
 	const VkPhysicalDevice16BitStorageFeatures _enabledStorage16Features;
@@ -770,9 +792,8 @@
 	const VkPhysicalDeviceVertexAttributeDivisorFeaturesEXT _enabledVtxAttrDivFeatures;
 	const VkPhysicalDevicePortabilitySubsetFeaturesKHR _enabledPortabilityFeatures;
 	const VkPhysicalDeviceImagelessFramebufferFeaturesKHR _enabledImagelessFramebufferFeatures;
-
-	/** The list of Vulkan extensions, indicating whether each has been enabled by the app for this device. */
-	const MVKExtensionList _enabledExtensions;
+	const VkPhysicalDeviceDynamicRenderingFeatures _enabledDynamicRenderingFeatures;
+	const VkPhysicalDeviceSeparateDepthStencilLayoutsFeatures _enabledSeparateDepthStencilLayoutsFeatures;
 
 	/** Pointer to the Metal-specific features of the underlying physical device. */
 	const MVKPhysicalDeviceMetalFeatures* _pMetalFeatures;
@@ -847,21 +868,15 @@
 	std::mutex _rezLock;
 	std::mutex _sem4Lock;
     std::mutex _perfLock;
-    id<MTLBuffer> _globalVisibilityResultMTLBuffer;
-	id<MTLSamplerState> _defaultMTLSamplerState;
-	id<MTLBuffer> _dummyBlitMTLBuffer;
-    uint32_t _globalVisibilityQueryCount;
-    std::mutex _vizLock;
-	bool _logActivityPerformanceInline;
-	bool _isPerformanceTracking;
-	bool _isCurrentlyAutoGPUCapturing;
-
-	typedef enum {
-		VkSemaphoreStyleUseMTLEvent,
-		VkSemaphoreStyleUseMTLFence,
-		VkSemaphoreStyleUseEmulation
-	} VkSemaphoreStyle;
-	VkSemaphoreStyle _vkSemaphoreStyle;
+	std::mutex _vizLock;
+    id<MTLBuffer> _globalVisibilityResultMTLBuffer = nil;
+	id<MTLSamplerState> _defaultMTLSamplerState = nil;
+	id<MTLBuffer> _dummyBlitMTLBuffer = nil;
+	MVKSemaphoreStyle _vkSemaphoreStyle = MVKSemaphoreStyleUseEmulation;
+    uint32_t _globalVisibilityQueryCount = 0;
+	bool _logActivityPerformanceInline = false;
+	bool _isPerformanceTracking = false;
+	bool _isCurrentlyAutoGPUCapturing = false;
 
 };
 
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKDevice.mm b/MoltenVK/MoltenVK/GPUObjects/MVKDevice.mm
index 9d70f24..2d29510 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKDevice.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKDevice.mm
@@ -75,6 +75,9 @@
 static const uint32_t kAMDRadeonRX6800DeviceId = 0x73bf;
 static const uint32_t kAMDRadeonRX6700DeviceId = 0x73df;
 
+static const VkExtent2D kMetalSamplePositionGridSize = { 1, 1 };
+static const VkExtent2D kMetalSamplePositionGridSizeNotSupported = { 0, 0 };
+
 #pragma clang diagnostic pop
 
 
@@ -274,6 +277,16 @@
 				imagelessFramebufferFeatures->imagelessFramebuffer = true;
 				break;
 			}
+			case VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DYNAMIC_RENDERING_FEATURES: {
+				auto* dynamicRenderingFeatures = (VkPhysicalDeviceDynamicRenderingFeatures*)next;
+				dynamicRenderingFeatures->dynamicRendering = true;
+				break;
+			}
+			case VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SEPARATE_DEPTH_STENCIL_LAYOUTS_FEATURES: {
+				auto* separateDepthStencilLayoutsFeatures = (VkPhysicalDeviceSeparateDepthStencilLayoutsFeatures*)next;
+				separateDepthStencilLayoutsFeatures->separateDepthStencilLayouts = true;
+				break;
+			}
 			default:
 				break;
 		}
@@ -457,6 +470,16 @@
 				portabilityProps->minVertexInputBindingStrideAlignment = (uint32_t)_metalFeatures.vertexStrideAlignment;
 				break;
 			}
+			case VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SAMPLE_LOCATIONS_PROPERTIES_EXT: {
+				auto* sampLocnProps = (VkPhysicalDeviceSampleLocationsPropertiesEXT*)next;
+				sampLocnProps->sampleLocationSampleCounts = _metalFeatures.supportedSampleCounts;
+				sampLocnProps->maxSampleLocationGridSize = kMetalSamplePositionGridSize;
+				sampLocnProps->sampleLocationCoordinateRange[0] = 0.0;
+				sampLocnProps->sampleLocationCoordinateRange[1] = (15.0 / 16.0);
+				sampLocnProps->sampleLocationSubPixelBits = 4;
+				sampLocnProps->variableSampleLocations = VK_FALSE;
+				break;
+			}
 			default:
 				break;
 		}
@@ -526,6 +549,15 @@
 	getFormatProperties(format, &pFormatProperties->formatProperties);
 }
 
+void MVKPhysicalDevice::getMultisampleProperties(VkSampleCountFlagBits samples,
+												 VkMultisamplePropertiesEXT* pMultisampleProperties) {
+	if (pMultisampleProperties) {
+		pMultisampleProperties->maxSampleLocationGridSize = (mvkIsOnlyAnyFlagEnabled(samples, _metalFeatures.supportedSampleCounts)
+															 ? kMetalSamplePositionGridSize
+															 : kMetalSamplePositionGridSizeNotSupported);
+	}
+}
+
 VkResult MVKPhysicalDevice::getImageFormatProperties(VkFormat format,
 													 VkImageType type,
 													 VkImageTiling tiling,
@@ -709,6 +741,7 @@
 VkResult MVKPhysicalDevice::getImageFormatProperties(const VkPhysicalDeviceImageFormatInfo2 *pImageFormatInfo,
 													 VkImageFormatProperties2* pImageFormatProperties) {
 
+	auto usage = pImageFormatInfo->usage;
 	for (const auto* nextInfo = (VkBaseInStructure*)pImageFormatInfo->pNext; nextInfo; nextInfo = nextInfo->pNext) {
 		switch (nextInfo->sType) {
 			case VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_IMAGE_FORMAT_INFO: {
@@ -723,6 +756,13 @@
 				}
 				break;
 			}
+			case VK_STRUCTURE_TYPE_IMAGE_STENCIL_USAGE_CREATE_INFO: {
+				// If the format includes a stencil component, combine any separate stencil usage with non-stencil usage.
+				if (_pixelFormats.isStencilFormat(_pixelFormats.getMTLPixelFormat(pImageFormatInfo->format))) {
+					usage |= ((VkImageStencilUsageCreateInfo*)nextInfo)->stencilUsage;
+				}
+				break;
+			}
 			default:
 				break;
 		}
@@ -743,7 +783,7 @@
 	if ( !_pixelFormats.isSupported(pImageFormatInfo->format) ) { return VK_ERROR_FORMAT_NOT_SUPPORTED; }
 
 	return getImageFormatProperties(pImageFormatInfo->format, pImageFormatInfo->type,
-									pImageFormatInfo->tiling, pImageFormatInfo->usage,
+									pImageFormatInfo->tiling, usage,
 									pImageFormatInfo->flags,
 									&pImageFormatProperties->imageFormatProperties);
 }
@@ -1519,9 +1559,12 @@
 
 #endif
 
-    // Note the selector name, which is different from the property name.
+	if ( [_mtlDevice respondsToSelector: @selector(areProgrammableSamplePositionsSupported)] ) {
+		_metalFeatures.programmableSamplePositions = _mtlDevice.areProgrammableSamplePositionsSupported;
+	}
+
     if ( [_mtlDevice respondsToSelector: @selector(areRasterOrderGroupsSupported)] ) {
-        _metalFeatures.rasterOrderGroups = _mtlDevice.rasterOrderGroupsSupported;
+        _metalFeatures.rasterOrderGroups = _mtlDevice.areRasterOrderGroupsSupported;
     }
 #if MVK_XCODE_12
 	if ( [_mtlDevice respondsToSelector: @selector(supportsPullModelInterpolation)] ) {
@@ -2738,6 +2781,9 @@
 	if (!_metalFeatures.samplerMirrorClampToEdge) {
 		pWritableExtns->vk_KHR_sampler_mirror_clamp_to_edge.enabled = false;
 	}
+	if (!_metalFeatures.programmableSamplePositions) {
+		pWritableExtns->vk_EXT_sample_locations.enabled = false;
+	}
 	if (!_metalFeatures.rasterOrderGroups) {
 		pWritableExtns->vk_EXT_fragment_shader_interlock.enabled = false;
 	}
@@ -2766,6 +2812,14 @@
 	@autoreleasepool {
 		if (_metalFeatures.counterSamplingPoints) {
 			NSArray<id<MTLCounterSet>>* counterSets = _mtlDevice.counterSets;
+
+			// Workaround for a bug in Intel Iris Plus Graphics driver where the counterSets
+			// array is not properly retained internally, and becomes a zombie when counterSets
+			// is called more than once, which occurs when an app creates more than one VkInstance.
+			// This workaround will cause a very small memory leak on systems that do not have this
+			// bug, so we apply the workaround only when absolutely needed for specific devices.
+			if (_properties.vendorID == kIntelVendorId && _properties.deviceID == 0x8a53) { [counterSets retain]; }
+
 			for (id<MTLCounterSet> cs in counterSets){
 				NSString* csName = cs.name;
 				if ( [csName caseInsensitiveCompare: MTLCommonCounterSetTimestamp] == NSOrderedSame) {
@@ -3288,7 +3342,7 @@
 	}
 
 	if ((pTypeCreateInfo && pTypeCreateInfo->semaphoreType == VK_SEMAPHORE_TYPE_TIMELINE) ||
-		(pExportInfo && mvkIsAnyFlagEnabled(pExportInfo->exportObjectTypes, VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT)) ||
+		(pExportInfo && pExportInfo->exportObjectType == VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT) ||
 		 pImportInfo) {
 		if (_pMetalFeatures->events) {
 			return new MVKTimelineSemaphoreMTLEvent(this, pCreateInfo, pTypeCreateInfo, pExportInfo, pImportInfo);
@@ -3297,9 +3351,9 @@
 		}
 	} else {
 		switch (_vkSemaphoreStyle) {
-			case VkSemaphoreStyleUseMTLEvent:  return new MVKSemaphoreMTLEvent(this, pCreateInfo);
-			case VkSemaphoreStyleUseMTLFence:  return new MVKSemaphoreMTLFence(this, pCreateInfo);
-			case VkSemaphoreStyleUseEmulation: return new MVKSemaphoreEmulated(this, pCreateInfo);
+			case MVKSemaphoreStyleUseMTLEvent:  return new MVKSemaphoreMTLEvent(this, pCreateInfo);
+			case MVKSemaphoreStyleUseMTLFence:  return new MVKSemaphoreMTLFence(this, pCreateInfo);
+			case MVKSemaphoreStyleUseEmulation: return new MVKSemaphoreEmulated(this, pCreateInfo);
 		}
 	}
 }
@@ -3733,6 +3787,57 @@
 	}
 }
 
+uint32_t MVKDevice::getMultiviewMetalPassCount(uint32_t viewMask) const {
+	if ( !viewMask ) { return 0; }
+	if ( !_physicalDevice->canUseInstancingForMultiview() ) {
+		// If we can't use instanced drawing for this, we'll have to unroll the render pass.
+		return __builtin_popcount(viewMask);
+	}
+	uint32_t mask = viewMask;
+	uint32_t count;
+	// Step through each group of set bits until there are no more groups. I'll know this has
+	// happened when the mask becomes 0, since mvkGetNextViewMaskGroup() clears each group of bits
+	// as it finds them, and returns the remainder of the mask.
+	for (count = 0; mask != 0; ++count) {
+		mask = mvkGetNextViewMaskGroup(mask, nullptr, nullptr);
+	}
+	return count;
+}
+
+uint32_t MVKDevice::getFirstViewIndexInMetalPass(uint32_t viewMask, uint32_t passIdx) const {
+	if ( !viewMask ) { return 0; }
+	assert(passIdx < getMultiviewMetalPassCount(viewMask));
+	uint32_t mask = viewMask;
+	uint32_t startView = 0, viewCount = 0;
+	if ( !_physicalDevice->canUseInstancingForMultiview() ) {
+		for (uint32_t i = 0; mask != 0; ++i) {
+			mask = mvkGetNextViewMaskGroup(mask, &startView, &viewCount);
+			while (passIdx-- > 0 && viewCount-- > 0) {
+				startView++;
+			}
+		}
+	} else {
+		for (uint32_t i = 0; i <= passIdx; ++i) {
+			mask = mvkGetNextViewMaskGroup(mask, &startView, nullptr);
+		}
+	}
+	return startView;
+}
+
+uint32_t MVKDevice::getViewCountInMetalPass(uint32_t viewMask, uint32_t passIdx) const {
+	if ( !viewMask ) { return 0; }
+	assert(passIdx < getMultiviewMetalPassCount(viewMask));
+	if ( !_physicalDevice->canUseInstancingForMultiview() ) {
+		return 1;
+	}
+	uint32_t mask = viewMask;
+	uint32_t viewCount = 0;
+	for (uint32_t i = 0; i <= passIdx; ++i) {
+		mask = mvkGetNextViewMaskGroup(mask, nullptr, &viewCount);
+	}
+	return viewCount;
+}
+
 
 #pragma mark Metal
 
@@ -3915,7 +4020,7 @@
 			}
 			case VK_STRUCTURE_TYPE_EXPORT_METAL_TEXTURE_INFO_EXT: {
 				auto* pImgInfo = (VkExportMetalTextureInfoEXT*)next;
-				uint8_t planeIndex = MVKImage::getPlaneFromVkImageAspectFlags(pImgInfo->aspectMask);
+				uint8_t planeIndex = MVKImage::getPlaneFromVkImageAspectFlags(pImgInfo->plane);
 				auto* mvkImg = (MVKImage*)pImgInfo->image;
 				auto* mvkImgView = (MVKImageView*)pImgInfo->imageView;
 				auto* mvkBuffView = (MVKBufferView*)pImgInfo->bufferView;
@@ -3972,10 +4077,11 @@
 	_enabledPrivateDataFeatures(),
 	_enabledPortabilityFeatures(),
 	_enabledImagelessFramebufferFeatures(),
-	_enabledExtensions(this),
-	_isCurrentlyAutoGPUCapturing(false)
-{
-	// If the physical device is lost, bail.
+	_enabledDynamicRenderingFeatures(),
+	_enabledSeparateDepthStencilLayoutsFeatures(),
+	_enabledExtensions(this) {
+
+	// If the physical device is lost, bail.
 	if (physicalDevice->getConfigurationResult() != VK_SUCCESS) {
 		setConfigurationResult(physicalDevice->getConfigurationResult());
 		return;
@@ -3983,8 +4089,8 @@
 
 	initPerformanceTracking();
 	initPhysicalDevice(physicalDevice, pCreateInfo);
-	enableFeatures(pCreateInfo);
 	enableExtensions(pCreateInfo);
+	enableFeatures(pCreateInfo);
 	initQueues(pCreateInfo);
 	reservePrivateData(pCreateInfo);
 
@@ -4066,15 +4172,15 @@
 	bool isRosetta2 = _pProperties->vendorID == kAppleVendorId && !MVK_APPLE_SILICON;
 	bool canUseMTLEventForSem4 = _pMetalFeatures->events && mvkConfig().semaphoreUseMTLEvent && !(isRosetta2 || isNVIDIA);
 	bool canUseMTLFenceForSem4 = _pMetalFeatures->fences && mvkConfig().semaphoreUseMTLFence;
-	_vkSemaphoreStyle = canUseMTLEventForSem4 ? VkSemaphoreStyleUseMTLEvent : (canUseMTLFenceForSem4 ? VkSemaphoreStyleUseMTLFence : VkSemaphoreStyleUseEmulation);
+	_vkSemaphoreStyle = canUseMTLEventForSem4 ? MVKSemaphoreStyleUseMTLEvent : (canUseMTLFenceForSem4 ? MVKSemaphoreStyleUseMTLFence : MVKSemaphoreStyleUseEmulation);
 	switch (_vkSemaphoreStyle) {
-		case VkSemaphoreStyleUseMTLEvent:
+		case MVKSemaphoreStyleUseMTLEvent:
 			MVKLogInfo("Using MTLEvent for Vulkan semaphores.");
 			break;
-		case VkSemaphoreStyleUseMTLFence:
+		case MVKSemaphoreStyleUseMTLFence:
 			MVKLogInfo("Using MTLFence for Vulkan semaphores.");
 			break;
-		case VkSemaphoreStyleUseEmulation:
+		case MVKSemaphoreStyleUseEmulation:
 			MVKLogInfo("Using emulation for Vulkan semaphores.");
 			break;
 	}
@@ -4100,10 +4206,20 @@
 	mvkClear(&_enabledVtxAttrDivFeatures);
 	mvkClear(&_enabledPortabilityFeatures);
 	mvkClear(&_enabledImagelessFramebufferFeatures);
+	mvkClear(&_enabledDynamicRenderingFeatures);
+	mvkClear(&_enabledSeparateDepthStencilLayoutsFeatures);
+
+	VkPhysicalDeviceSeparateDepthStencilLayoutsFeatures pdSeparateDepthStencilLayoutsFeatures;
+	pdSeparateDepthStencilLayoutsFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SEPARATE_DEPTH_STENCIL_LAYOUTS_FEATURES;
+	pdSeparateDepthStencilLayoutsFeatures.pNext = nullptr;
+
+	VkPhysicalDeviceDynamicRenderingFeatures pdDynamicRenderingFeatures;
+	pdDynamicRenderingFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DYNAMIC_RENDERING_FEATURES;
+	pdDynamicRenderingFeatures.pNext = &pdSeparateDepthStencilLayoutsFeatures;
 
 	VkPhysicalDeviceImagelessFramebufferFeaturesKHR pdImagelessFramebufferFeatures;
 	pdImagelessFramebufferFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGELESS_FRAMEBUFFER_FEATURES;
-	pdImagelessFramebufferFeatures.pNext = NULL;
+	pdImagelessFramebufferFeatures.pNext = &pdDynamicRenderingFeatures;
     
 	// Fetch the available physical device features.
 	VkPhysicalDevicePortabilitySubsetFeaturesKHR pdPortabilityFeatures;
@@ -4300,6 +4416,20 @@
 							   &pdImagelessFramebufferFeatures.imagelessFramebuffer, 1);
 				break;
 			}
+			case VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DYNAMIC_RENDERING_FEATURES: {
+				auto* requestedFeatures = (VkPhysicalDeviceDynamicRenderingFeatures*)next;
+				enableFeatures(&_enabledDynamicRenderingFeatures.dynamicRendering,
+							   &requestedFeatures->dynamicRendering,
+							   &pdDynamicRenderingFeatures.dynamicRendering, 1);
+				break;
+			}
+			case VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SEPARATE_DEPTH_STENCIL_LAYOUTS_FEATURES: {
+				auto* requestedFeatures = (VkPhysicalDeviceSeparateDepthStencilLayoutsFeatures*)next;
+				enableFeatures(&_enabledSeparateDepthStencilLayoutsFeatures.separateDepthStencilLayouts,
+							   &requestedFeatures->separateDepthStencilLayouts,
+							   &pdSeparateDepthStencilLayoutsFeatures.separateDepthStencilLayouts, 1);
+				break;
+			}
 			default:
 				break;
 		}
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKDeviceMemory.mm b/MoltenVK/MoltenVK/GPUObjects/MVKDeviceMemory.mm
index 7a92dd6..a397e3c 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKDeviceMemory.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKDeviceMemory.mm
@@ -314,7 +314,7 @@
 			}
 			case VK_STRUCTURE_TYPE_EXPORT_METAL_OBJECT_CREATE_INFO_EXT: {
 				const auto* pExportInfo = (VkExportMetalObjectCreateInfoEXT*)next;
-				willExportMTLBuffer = mvkIsAnyFlagEnabled(pExportInfo->exportObjectTypes, VK_EXPORT_METAL_OBJECT_TYPE_METAL_BUFFER_BIT_EXT);
+				willExportMTLBuffer = pExportInfo->exportObjectType == VK_EXPORT_METAL_OBJECT_TYPE_METAL_BUFFER_BIT_EXT;
 				break;
 			}
 			default:
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.h b/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.h
index 5fe7418..60aeb42 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.h
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.h
@@ -70,3 +70,11 @@
 	uint32_t _layerCount;
 };
 
+
+#pragma mark -
+#pragma mark Support functions
+
+/** Returns an image-less MVKFramebuffer object created from the rendering info. */
+MVKFramebuffer* mvkCreateFramebuffer(MVKDevice* device,
+									 const VkRenderingInfo* pRenderingInfo,
+									 MVKRenderPass* mvkRenderPass);
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.mm b/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.mm
index 66b7470..7bf9f47 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKFramebuffer.mm
@@ -85,8 +85,8 @@
     _extent = { .width = pCreateInfo->width, .height = pCreateInfo->height };
 	_layerCount = pCreateInfo->layers;
 
-	if (!(pCreateInfo->flags & VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT_KHR)) {
-		// Add attachments
+	// If this is not an image-less framebuffer, add the attachments
+	if ( !mvkIsAnyFlagEnabled(pCreateInfo->flags, VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT) ) {
 		_attachments.reserve(pCreateInfo->attachmentCount);
 		for (uint32_t i = 0; i < pCreateInfo->attachmentCount; i++) {
 			_attachments.push_back((MVKImageView*)pCreateInfo->pAttachments[i]);
@@ -98,3 +98,47 @@
 	[_mtlDummyTex release];
 }
 
+
+#pragma mark -
+#pragma mark Support functions
+
+MVKFramebuffer* mvkCreateFramebuffer(MVKDevice* device,
+									 const VkRenderingInfo* pRenderingInfo,
+									 MVKRenderPass* mvkRenderPass) {
+	uint32_t attCnt = 0;
+	VkExtent3D fbExtent = {};
+	for (uint32_t caIdx = 0; caIdx < pRenderingInfo->colorAttachmentCount; caIdx++) {
+		auto& clrAtt = pRenderingInfo->pColorAttachments[caIdx];
+		if (clrAtt.imageView) {
+			fbExtent = ((MVKImageView*)clrAtt.imageView)->getExtent3D();
+			attCnt++;
+			if (clrAtt.resolveImageView && clrAtt.resolveMode != VK_RESOLVE_MODE_NONE) {
+				attCnt++;
+			}
+		}
+	}
+	auto* pDSAtt = pRenderingInfo->pDepthAttachment ? pRenderingInfo->pDepthAttachment : pRenderingInfo->pStencilAttachment;
+	if (pDSAtt) {
+		if (pDSAtt->imageView) {
+			fbExtent = ((MVKImageView*)pDSAtt->imageView)->getExtent3D();
+			attCnt++;
+		}
+		if (pDSAtt->resolveImageView && pDSAtt->resolveMode != VK_RESOLVE_MODE_NONE) {
+			attCnt++;
+		}
+	}
+
+	VkFramebufferCreateInfo fbCreateInfo;
+	fbCreateInfo.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
+	fbCreateInfo.pNext = nullptr;
+	fbCreateInfo.flags = VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT;
+	fbCreateInfo.renderPass = (VkRenderPass)mvkRenderPass;
+	fbCreateInfo.attachmentCount = attCnt;
+	fbCreateInfo.pAttachments = nullptr;
+	fbCreateInfo.width = fbExtent.width;
+	fbCreateInfo.height = fbExtent.height;
+	fbCreateInfo.layers = pRenderingInfo->layerCount;
+
+	return device->createFramebuffer(&fbCreateInfo, nullptr);
+}
+
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKImage.h b/MoltenVK/MoltenVK/GPUObjects/MVKImage.h
index 01decf8..b56eeb0 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKImage.h
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKImage.h
@@ -344,6 +344,7 @@
 	void initExternalMemory(VkExternalMemoryHandleTypeFlags handleTypes);
     void releaseIOSurface();
 	bool getIsValidViewFormat(VkFormat viewFormat);
+	VkImageUsageFlags getCombinedUsage() { return _usage | _stencilUsage; }
 	MTLTextureUsage getMTLTextureUsage(MTLPixelFormat mtlPixFmt);
 
     MVKSmallVector<MVKImageMemoryBinding*, 3> _memoryBindings;
@@ -354,6 +355,7 @@
     uint32_t _arrayLayers;
     VkSampleCountFlagBits _samples;
     VkImageUsageFlags _usage;
+	VkImageUsageFlags _stencilUsage;
     VkFormat _vkFormat;
 	MTLTextureType _mtlTextureType;
     std::mutex _lock;
@@ -570,7 +572,13 @@
 
 	/** Returns the Metal pixel format of this image view. */
 	MTLPixelFormat getMTLPixelFormat(uint8_t planeIndex = 0) { return planeIndex < _planes.size() ? _planes[planeIndex]->_mtlPixFmt : MTLPixelFormatInvalid; }	// Guard against destroyed instance retained in a descriptor.
-    
+
+	/** Returns the Vulkan pixel format of this image view. */
+	VkFormat getVkFormat(uint8_t planeIndex = 0) { return getPixelFormats()->getVkFormat(getMTLPixelFormat(planeIndex)); }
+
+	/** Returns the number of samples for each pixel of this image view. */
+	VkSampleCountFlagBits getSampleCount() { return _image->getSampleCount(); }
+
     /** Returns the packed component swizzle of this image view. */
     uint32_t getPackedSwizzle() { return _planes.empty() ? 0 : _planes[0]->getPackedSwizzle(); }	// Guard against destroyed instance retained in a descriptor.
     
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKImage.mm b/MoltenVK/MoltenVK/GPUObjects/MVKImage.mm
index 798afef..8340095 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKImage.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKImage.mm
@@ -392,7 +392,7 @@
         switch (next->sType) {
         case VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS: {
             auto* dedicatedReqs = (VkMemoryDedicatedRequirements*)next;
-            bool writable = mvkIsAnyFlagEnabled(_image->_usage, VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT);
+            bool writable = mvkIsAnyFlagEnabled(_image->getCombinedUsage(), VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT);
             bool canUseTexelBuffer = _device->_pMetalFeatures->texelBuffers && _image->_isLinear && !_image->getIsCompressed();
             dedicatedReqs->requiresDedicatedAllocation = _requiresDedicatedMemoryAllocation;
             dedicatedReqs->prefersDedicatedAllocation = (dedicatedReqs->requiresDedicatedAllocation ||
@@ -609,7 +609,7 @@
     imgData.mipLevels = _mipLevels;
     imgData.arrayLayers = _arrayLayers;
     imgData.samples = _samples;
-    imgData.usage = _usage;
+    imgData.usage = getCombinedUsage();
 }
 
 // Returns whether an MVKImageView can have the specified format.
@@ -651,8 +651,8 @@
     // Only transient attachments may use memoryless storage.
 	// Using memoryless as an input attachment requires shader framebuffer fetch, which MoltenVK does not support yet.
 	// TODO: support framebuffer fetch so VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT uses color(m) in shader instead of setFragmentTexture:, which crashes Metal
-    if (!mvkIsAnyFlagEnabled(_usage, VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT) ||
-		 mvkIsAnyFlagEnabled(_usage, VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT) ) {
+    if (!mvkIsAnyFlagEnabled(getCombinedUsage(), VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT) ||
+		 mvkIsAnyFlagEnabled(getCombinedUsage(), VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT) ) {
         mvkDisableFlags(pMemoryRequirements->memoryTypeBits, getPhysicalDevice()->getLazilyAllocatedMemoryTypes());
     }
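The switch from `_usage` to `getCombinedUsage()` above matters because, with `VK_EXT_separate_stencil_usage`, the stencil aspect may carry a different usage mask than the rest of the image. A minimal sketch of the idea, assuming (as the rest of this diff suggests) that the combined usage is simply the union of the two masks; the names here are illustrative:

```cpp
#include <cstdint>

// Hypothetical mirror of the VK_EXT_separate_stencil_usage bookkeeping:
// an image tracks its overall usage plus a possibly different stencil usage.
using VkImageUsageFlags = uint32_t;

struct ImageUsage {
    VkImageUsageFlags usage        = 0;
    VkImageUsageFlags stencilUsage = 0;  // defaults to `usage` unless overridden

    // Any capability required by either the depth/color or the stencil aspect.
    VkImageUsageFlags getCombinedUsage() const { return usage | stencilUsage; }
};
```

Checks like "is this image writable?" or "may it use memoryless storage?" must consider both masks, which is why the call sites above were updated.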
 
@@ -729,6 +729,7 @@
 	_samples = mvkVkSampleCountFlagBitsFromSampleCount(mtlTexture.sampleCount);
 	_arrayLayers = uint32_t(mtlTexture.arrayLength);
 	_usage = getPixelFormats()->getVkImageUsageFlags(mtlTexture.usage, mtlTexture.pixelFormat);
+	_stencilUsage = _usage;
 
 	if (_device->_pMetalFeatures->ioSurfaces) {
 		_ioSurface = mtlTexture.iosurface;
@@ -750,7 +751,8 @@
 VkResult MVKImage::useIOSurface(IOSurfaceRef ioSurface) {
 	lock_guard<mutex> lock(_lock);
 
-	if (_ioSurface == ioSurface) { return VK_SUCCESS; }
+	// Don't recreate an existing IOSurface. As a special case, an incoming nil requests creation of a new IOSurface.
+	if (ioSurface && _ioSurface == ioSurface) { return VK_SUCCESS; }
 
     if (!_device->_pMetalFeatures->ioSurfaces) { return reportError(VK_ERROR_FEATURE_NOT_PRESENT, "vkUseIOSurfaceMVK() : IOSurfaces are not supported on this platform."); }
 
@@ -807,8 +809,10 @@
                     });
                 }
                 CFDictionaryAddValue(properties, (id)kIOSurfacePlaneInfo, planeProperties);
+                CFRelease(planeProperties);
             }
             _ioSurface = IOSurfaceCreate(properties);
+            CFRelease(properties);
         }
     }
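The two `CFRelease` calls added above fix leaks under Core Foundation's *Create rule*: `CFDictionaryCreateMutable` and similar Create/Copy functions return a +1 reference the caller owns and must release. A toy reference-count model (not Core Foundation itself, just an illustration of the ownership fix):

```cpp
// Toy model of the Core Foundation "Create rule": a Create-style function
// hands back ownership (+1), and the caller must balance it with a release.
struct RefCounted {
    int refs = 1;  // created with one owned reference
};

inline RefCounted* createObject() { return new RefCounted(); }   // like CFDictionaryCreateMutable
inline void retainObject(RefCounted* obj) { ++obj->refs; }       // another owner retains it

// Returns true when the last reference was dropped and the object was freed.
inline bool releaseObject(RefCounted*& obj) {
    if (--obj->refs == 0) { delete obj; obj = nullptr; return true; }
    return false;
}
```

Before this change, the properties dictionaries were created but never released, so each `useIOSurface()` call leaked them.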
 
@@ -852,7 +856,7 @@
 		needsReinterpretation = needsReinterpretation || !pixFmts->compatibleAsLinearOrSRGB(mtlPixFmt, viewFmt);
 	}
 
-	MTLTextureUsage mtlUsage = pixFmts->getMTLTextureUsage(_usage, mtlPixFmt, _samples, _isLinear, needsReinterpretation, _hasExtendedUsage);
+	MTLTextureUsage mtlUsage = pixFmts->getMTLTextureUsage(getCombinedUsage(), mtlPixFmt, _samples, _isLinear, needsReinterpretation, _hasExtendedUsage);
 
 	// Metal before 3.0 doesn't support 3D compressed textures, so we'll
 	// decompress the texture ourselves, and we need to be able to write to it.
@@ -869,6 +873,10 @@
 MVKImage::MVKImage(MVKDevice* device, const VkImageCreateInfo* pCreateInfo) : MVKVulkanAPIDeviceObject(device) {
 	_ioSurface = nil;
 
+	// Stencil usage is implied to be the same as usage, unless overridden in the pNext chain.
+	_usage = pCreateInfo->usage;
+	_stencilUsage = _usage;
+
 	const VkExternalMemoryImageCreateInfo* pExtMemInfo = nullptr;
 	for (const auto* next = (const VkBaseInStructure*)pCreateInfo->pNext; next; next = next->pNext) {
 		switch (next->sType) {
@@ -884,6 +892,9 @@
 				}
 				break;
 			}
+			case VK_STRUCTURE_TYPE_IMAGE_STENCIL_USAGE_CREATE_INFO:
+				_stencilUsage = ((VkImageStencilUsageCreateInfo*)next)->stencilUsage;
+				break;
 			default:
 				break;
 	}
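The new `VK_STRUCTURE_TYPE_IMAGE_STENCIL_USAGE_CREATE_INFO` case above is a standard Vulkan pNext-chain walk. A self-contained sketch using simplified stand-ins for the Vulkan structures (a real app would use `<vulkan/vulkan.h>`; the `sType` constant matches the real enum value for `VK_STRUCTURE_TYPE_IMAGE_STENCIL_USAGE_CREATE_INFO`):

```cpp
#include <cstdint>

// Simplified stand-ins for Vulkan's chained-structure layout.
enum : uint32_t {
    STYPE_IMAGE_STENCIL_USAGE_CREATE_INFO = 1000246000,
};

struct BaseInStructure {
    uint32_t sType;
    const BaseInStructure* pNext;
};

struct StencilUsageCreateInfo {
    uint32_t sType;
    const BaseInStructure* pNext;
    uint32_t stencilUsage;
};

// Stencil usage defaults to the image usage unless a
// VkImageStencilUsageCreateInfo appears in the pNext chain.
inline uint32_t resolveStencilUsage(uint32_t usage, const BaseInStructure* chain) {
    for (const BaseInStructure* next = chain; next; next = next->pNext) {
        if (next->sType == STYPE_IMAGE_STENCIL_USAGE_CREATE_INFO) {
            return reinterpret_cast<const StencilUsageCreateInfo*>(next)->stencilUsage;
        }
    }
    return usage;
}
```

This matches the behavior in the constructor above: `_stencilUsage` starts equal to `_usage` and is only replaced when the override structure is present.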
@@ -912,7 +923,6 @@
 
 	MVKPixelFormats* pixFmts = getPixelFormats();
     _vkFormat = pCreateInfo->format;
-	_usage = pCreateInfo->usage;
     _isAliasable = mvkIsAnyFlagEnabled(pCreateInfo->flags, VK_IMAGE_CREATE_ALIAS_BIT);
 	_hasMutableFormat = mvkIsAnyFlagEnabled(pCreateInfo->flags, VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT);
 	_hasExtendedUsage = mvkIsAnyFlagEnabled(pCreateInfo->flags, VK_IMAGE_CREATE_EXTENDED_USAGE_BIT);
@@ -920,7 +930,7 @@
     // If this is a storage image of format R32_UINT or R32_SINT, or MUTABLE_FORMAT is set
     // and R32_UINT is in the set of possible view formats, then we must use a texel buffer,
     // or image atomics won't work.
-	_isLinearForAtomics = (_isLinear && mvkIsAnyFlagEnabled(_usage, VK_IMAGE_USAGE_STORAGE_BIT) &&
+	_isLinearForAtomics = (_isLinear && mvkIsAnyFlagEnabled(getCombinedUsage(), VK_IMAGE_USAGE_STORAGE_BIT) &&
 						   ((_vkFormat == VK_FORMAT_R32_UINT || _vkFormat == VK_FORMAT_R32_SINT) ||
 							(_hasMutableFormat && pixFmts->getViewClass(_vkFormat) == MVKMTLViewClass::Color32 &&
 							 (getIsValidViewFormat(VK_FORMAT_R32_UINT) || getIsValidViewFormat(VK_FORMAT_R32_SINT)))));
@@ -985,11 +995,12 @@
 
 	// Setting Metal objects directly will override Vulkan settings.
 	// It is the responsibility of the app to ensure these are consistent. Not doing so results in undefined behavior.
+	const VkExportMetalObjectCreateInfoEXT* pExportInfo = nullptr;
 	for (const auto* next = (const VkBaseInStructure*)pCreateInfo->pNext; next; next = next->pNext) {
 		switch (next->sType) {
 			case VK_STRUCTURE_TYPE_IMPORT_METAL_TEXTURE_INFO_EXT: {
 				const auto* pMTLTexInfo = (VkImportMetalTextureInfoEXT*)next;
-				uint8_t planeIndex = MVKImage::getPlaneFromVkImageAspectFlags(pMTLTexInfo->aspectMask);
+				uint8_t planeIndex = MVKImage::getPlaneFromVkImageAspectFlags(pMTLTexInfo->plane);
 				setConfigurationResult(setMTLTexture(planeIndex, pMTLTexInfo->mtlTexture));
 				break;
 			}
@@ -998,10 +1009,19 @@
 				setConfigurationResult(useIOSurface(pIOSurfInfo->ioSurface));
 				break;
 			}
+			case VK_STRUCTURE_TYPE_EXPORT_METAL_OBJECT_CREATE_INFO_EXT:
+				pExportInfo = (VkExportMetalObjectCreateInfoEXT*)next;
+				break;
 			default:
 				break;
 		}
 	}
+
+	// If we're expecting to export an IOSurface, and weren't given one,
+	// base this image on a new IOSurface that matches its configuration.
+	if (pExportInfo && pExportInfo->exportObjectType == VK_EXPORT_METAL_OBJECT_TYPE_METAL_IOSURFACE_BIT_EXT && !_ioSurface) {
+		setConfigurationResult(useIOSurface(nil));
+	}
 }
 
 VkSampleCountFlagBits MVKImage::validateSamples(const VkImageCreateInfo* pCreateInfo, bool isAttachment) {
@@ -1774,12 +1794,22 @@
 
 MVKImageView::MVKImageView(MVKDevice* device, const VkImageViewCreateInfo* pCreateInfo) : MVKVulkanAPIDeviceObject(device) {
 	_image = (MVKImage*)pCreateInfo->image;
-	// Transfer commands don't use image views.
-	_usage = _image->_usage;
-	mvkDisableFlags(_usage, (VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT));
     _mtlTextureType = mvkMTLTextureTypeFromVkImageViewType(pCreateInfo->viewType,
 														   _image->getSampleCount() != VK_SAMPLE_COUNT_1_BIT);
 
+	// Per spec, for depth/stencil formats, determine the appropriate usage
+	// based on whether stencil or depth or both aspects are being used.
+	VkImageAspectFlags aspectMask = pCreateInfo->subresourceRange.aspectMask;
+	if (mvkAreAllFlagsEnabled(aspectMask, VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_DEPTH_BIT)) {
+		_usage = _image->_usage & _image->_stencilUsage;
+	} else if (mvkIsAnyFlagEnabled(aspectMask, VK_IMAGE_ASPECT_STENCIL_BIT)) {
+		_usage = _image->_stencilUsage;
+	} else {
+		_usage = _image->_usage;
+	}
+	// Image views can't be used in transfer commands.
+	mvkDisableFlags(_usage, (VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT));
+
 	for (const auto* next = (VkBaseInStructure*)pCreateInfo->pNext; next; next = next->pNext) {
 		switch (next->sType) {
 			case VK_STRUCTURE_TYPE_IMAGE_VIEW_USAGE_CREATE_INFO: {
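The aspect-based usage selection added to the `MVKImageView` constructor above can be sketched as a small pure function; bit values mirror `VkImageAspectFlagBits`:

```cpp
#include <cstdint>

constexpr uint32_t kAspectDepth   = 0x2;  // VK_IMAGE_ASPECT_DEPTH_BIT
constexpr uint32_t kAspectStencil = 0x4;  // VK_IMAGE_ASPECT_STENCIL_BIT

// Both aspects: only usages valid for both (intersection).
// Stencil only: the stencil usage. Otherwise: the image's general usage.
inline uint32_t selectViewUsage(uint32_t aspectMask, uint32_t usage, uint32_t stencilUsage) {
    constexpr uint32_t kBoth = kAspectDepth | kAspectStencil;
    if ((aspectMask & kBoth) == kBoth) { return usage & stencilUsage; }
    if (aspectMask & kAspectStencil)   { return stencilUsage; }
    return usage;
}
```

Note the asymmetry with `getCombinedUsage()` on the image: the image-level query needs the *union* (any capability either aspect requires), while a view covering both aspects may only rely on the *intersection*.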
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKInstance.mm b/MoltenVK/MoltenVK/GPUObjects/MVKInstance.mm
index 9d3fce1..9c9f197 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKInstance.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKInstance.mm
@@ -630,6 +630,8 @@
 	ADD_DVC_EXT_ENTRY_POINT(vkCmdBeginRenderPass2KHR, KHR_CREATE_RENDERPASS_2);
 	ADD_DVC_EXT_ENTRY_POINT(vkCmdNextSubpass2KHR, KHR_CREATE_RENDERPASS_2);
 	ADD_DVC_EXT_ENTRY_POINT(vkCmdEndRenderPass2KHR, KHR_CREATE_RENDERPASS_2);
+	ADD_DVC_EXT_ENTRY_POINT(vkCmdBeginRenderingKHR, KHR_DYNAMIC_RENDERING);
+	ADD_DVC_EXT_ENTRY_POINT(vkCmdEndRenderingKHR, KHR_DYNAMIC_RENDERING);
 	ADD_DVC_EXT_ENTRY_POINT(vkCmdPushDescriptorSetKHR, KHR_PUSH_DESCRIPTOR);
 	ADD_DVC_EXT2_ENTRY_POINT(vkCmdPushDescriptorSetWithTemplateKHR, KHR_PUSH_DESCRIPTOR, KHR_DESCRIPTOR_UPDATE_TEMPLATE);
 	ADD_DVC_EXT_ENTRY_POINT(vkCreateSwapchainKHR, KHR_SWAPCHAIN);
@@ -651,13 +653,15 @@
 	ADD_DVC_EXT_ENTRY_POINT(vkCmdDebugMarkerInsertEXT, EXT_DEBUG_MARKER);
 	ADD_DVC_EXT_ENTRY_POINT(vkSetHdrMetadataEXT, EXT_HDR_METADATA);
 	ADD_DVC_EXT_ENTRY_POINT(vkResetQueryPoolEXT, EXT_HOST_QUERY_RESET);
+	ADD_DVC_EXT_ENTRY_POINT(vkExportMetalObjectsEXT, EXT_METAL_OBJECTS);
 	ADD_DVC_EXT_ENTRY_POINT(vkCreatePrivateDataSlotEXT, EXT_PRIVATE_DATA);
 	ADD_DVC_EXT_ENTRY_POINT(vkDestroyPrivateDataSlotEXT, EXT_PRIVATE_DATA);
 	ADD_DVC_EXT_ENTRY_POINT(vkGetPrivateDataEXT, EXT_PRIVATE_DATA);
 	ADD_DVC_EXT_ENTRY_POINT(vkSetPrivateDataEXT, EXT_PRIVATE_DATA);
+	ADD_DVC_EXT_ENTRY_POINT(vkGetPhysicalDeviceMultisamplePropertiesEXT, EXT_SAMPLE_LOCATIONS);
+	ADD_DVC_EXT_ENTRY_POINT(vkCmdSetSampleLocationsEXT, EXT_SAMPLE_LOCATIONS);
 	ADD_DVC_EXT_ENTRY_POINT(vkGetRefreshCycleDurationGOOGLE, GOOGLE_DISPLAY_TIMING);
 	ADD_DVC_EXT_ENTRY_POINT(vkGetPastPresentationTimingGOOGLE, GOOGLE_DISPLAY_TIMING);
-	ADD_DVC_EXT_ENTRY_POINT(vkExportMetalObjectsEXT, EXT_METAL_OBJECTS);
 
 }
 
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.h b/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.h
index a7f271d..88b71b1 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.h
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.h
@@ -112,13 +112,13 @@
 #pragma mark -
 #pragma mark MVKPipeline
 
-static const uint32_t kMVKTessCtlInputBufferIndex = 30;
 static const uint32_t kMVKTessCtlNumReservedBuffers = 1;
+static const uint32_t kMVKTessCtlInputBufferBinding = 0;
 
-static const uint32_t kMVKTessEvalInputBufferIndex = 30;
-static const uint32_t kMVKTessEvalPatchInputBufferIndex = 29;
-static const uint32_t kMVKTessEvalLevelBufferIndex = 28;
 static const uint32_t kMVKTessEvalNumReservedBuffers = 3;
+static const uint32_t kMVKTessEvalInputBufferBinding = 0;
+static const uint32_t kMVKTessEvalPatchInputBufferBinding = 1;
+static const uint32_t kMVKTessEvalLevelBufferBinding = 2;
 
 /** Represents an abstract Vulkan pipeline. */
 class MVKPipeline : public MVKVulkanAPIDeviceObject {
@@ -203,9 +203,6 @@
 	MVKBitArray stages[4] = {};
 };
 
-/** The number of dynamic states possible in Vulkan. */
-static const uint32_t kMVKVkDynamicStateCount = 32;
-
 /** Represents a Vulkan graphics pipeline. */
 class MVKGraphicsPipeline : public MVKPipeline {
 
@@ -259,6 +256,19 @@
 	/** Returns true if the tessellation control shader needs a buffer to store its per-patch output. */
 	bool needsTessCtlPatchOutputBuffer() { return _needsTessCtlPatchOutputBuffer; }
 
+	/** Returns whether this pipeline has custom sample positions enabled. */
+	bool isUsingCustomSamplePositions() { return _isUsingCustomSamplePositions; }
+
+	/**
+	 * Returns whether the MTLBuffer vertex shader buffer index is valid for a stage of this pipeline.
+	 * It is valid if it is either a descriptor binding within the descriptor binding range,
+	 * or a vertex attribute binding above any implicit buffer bindings.
+	 */
+	bool isValidVertexBufferIndex(MVKShaderStage stage, uint32_t mtlBufferIndex);
+
+	/** Returns the custom samples used by this pipeline. */
+	MVKArrayRef<MTLSamplePosition> getCustomSamplePositions() { return _customSamplePositions.contents(); }
+
 	/** Returns the Metal vertex buffer index to use for the specified vertex attribute binding number.  */
 	uint32_t getMetalBufferIndexForVertexAttributeBinding(uint32_t binding) { return _device->getMetalBufferIndexForVertexAttributeBinding(binding); }
 
@@ -287,8 +297,10 @@
 
     id<MTLRenderPipelineState> getOrCompilePipeline(MTLRenderPipelineDescriptor* plDesc, id<MTLRenderPipelineState>& plState);
     id<MTLComputePipelineState> getOrCompilePipeline(MTLComputePipelineDescriptor* plDesc, id<MTLComputePipelineState>& plState, const char* compilerType);
+	void initCustomSamplePositions(const VkGraphicsPipelineCreateInfo* pCreateInfo);
     void initMTLRenderPipelineState(const VkGraphicsPipelineCreateInfo* pCreateInfo, const SPIRVTessReflectionData& reflectData);
     void initShaderConversionConfig(SPIRVToMSLConversionConfiguration& shaderConfig, const VkGraphicsPipelineCreateInfo* pCreateInfo, const SPIRVTessReflectionData& reflectData);
+	void initReservedVertexAttributeBufferCount(const VkGraphicsPipelineCreateInfo* pCreateInfo);
     void addVertexInputToShaderConversionConfig(SPIRVToMSLConversionConfiguration& shaderConfig, const VkGraphicsPipelineCreateInfo* pCreateInfo);
     void addPrevStageOutputToShaderConversionConfig(SPIRVToMSLConversionConfiguration& shaderConfig, SPIRVShaderOutputs& outputs);
     MTLRenderPipelineDescriptor* newMTLRenderPipelineDescriptor(const VkGraphicsPipelineCreateInfo* pCreateInfo, const SPIRVTessReflectionData& reflectData);
@@ -309,8 +321,7 @@
     bool isRasterizationDisabled(const VkGraphicsPipelineCreateInfo* pCreateInfo);
 	bool verifyImplicitBuffer(bool needsBuffer, MVKShaderImplicitRezBinding& index, MVKShaderStage stage, const char* name);
 	uint32_t getTranslatedVertexBinding(uint32_t binding, uint32_t translationOffset, uint32_t maxBinding);
-	uint32_t getImplicitBufferIndex(const VkGraphicsPipelineCreateInfo* pCreateInfo, MVKShaderStage stage, uint32_t bufferIndexOffset);
-	uint32_t getReservedBufferCount(const VkGraphicsPipelineCreateInfo* pCreateInfo, MVKShaderStage stage);
+	uint32_t getImplicitBufferIndex(MVKShaderStage stage, uint32_t bufferIndexOffset);
 
 	const VkPipelineShaderStageCreateInfo* _pVertexSS = nullptr;
 	const VkPipelineShaderStageCreateInfo* _pTessCtlSS = nullptr;
@@ -323,6 +334,8 @@
 
 	MVKSmallVector<VkViewport, kMVKCachedViewportScissorCount> _viewports;
 	MVKSmallVector<VkRect2D, kMVKCachedViewportScissorCount> _scissors;
+	MVKSmallVector<VkDynamicState> _dynamicState;
+	MVKSmallVector<MTLSamplePosition> _customSamplePositions;
 	MVKSmallVector<MVKTranslatedVertexBinding> _translatedVertexBindings;
 	MVKSmallVector<MVKZeroDivisorVertexBinding> _zeroDivisorVertexBindings;
 	MVKSmallVector<MVKStagedMTLArgumentEncoders> _mtlArgumentEncoders;
@@ -345,12 +358,12 @@
 
     float _blendConstants[4] = { 0.0, 0.0, 0.0, 1.0 };
     uint32_t _outputControlPointCount;
+	MVKShaderImplicitRezBinding _reservedVertexAttributeBufferCount;
 	MVKShaderImplicitRezBinding _viewRangeBufferIndex;
 	MVKShaderImplicitRezBinding _outputBufferIndex;
 	uint32_t _tessCtlPatchOutputBufferIndex = 0;
 	uint32_t _tessCtlLevelBufferIndex = 0;
 
-	bool _dynamicStateEnabled[kMVKVkDynamicStateCount];
 	bool _needsVertexSwizzleBuffer = false;
 	bool _needsVertexBufferSizeBuffer = false;
 	bool _needsVertexDynamicOffsetBuffer = false;
@@ -372,6 +385,7 @@
 	bool _isRasterizing = false;
 	bool _isRasterizingColor = false;
 	bool _isRasterizingDepthStencil = false;
+	bool _isUsingCustomSamplePositions = false;
 };
 
 
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.mm b/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.mm
index bf545e3..f801d3f 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKPipeline.mm
@@ -175,8 +175,9 @@
 
 void MVKPipeline::bindPushConstants(MVKCommandEncoder* cmdEncoder) {
 	for (uint32_t stage = kMVKShaderStageVertex; stage < kMVKShaderStageCount; stage++) {
-		if (cmdEncoder && _stageUsesPushConstants[stage]) {
-			cmdEncoder->getPushConstants(mvkVkShaderStageFlagBitsFromMVKShaderStage(MVKShaderStage(stage)))->setMTLBufferIndex(_pushConstantsBufferIndex.stages[stage]);
+		if (cmdEncoder) {
+			auto* pcState = cmdEncoder->getPushConstants(mvkVkShaderStageFlagBitsFromMVKShaderStage(MVKShaderStage(stage)));
+			pcState->setMTLBufferIndex(_pushConstantsBufferIndex.stages[stage], _stageUsesPushConstants[stage]);
 		}
 	}
 }
@@ -309,17 +310,18 @@
 }
 
 bool MVKGraphicsPipeline::supportsDynamicState(VkDynamicState state) {
-
-    // First test if this dynamic state is explicitly turned off
-    if ( (state >= kMVKVkDynamicStateCount) || !_dynamicStateEnabled[state] ) { return false; }
-
-    // Some dynamic states have other restrictions
-    switch (state) {
-        case VK_DYNAMIC_STATE_DEPTH_BIAS:
-            return _rasterInfo.depthBiasEnable;
-        default:
-            return true;
-    }
+	for (auto& ds : _dynamicState) {
+		if (state == ds) {
+			// Some dynamic states have other restrictions
+			switch (state) {
+				case VK_DYNAMIC_STATE_DEPTH_BIAS:
+					return _rasterInfo.depthBiasEnable;
+				default:
+					return true;
+			}
+		}
+	}
+	return false;
 }
 
 static const char vtxCompilerType[] = "Vertex stage pipeline for tessellation";
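The rewritten `supportsDynamicState()` above replaces a fixed-size bool array indexed by `VkDynamicState` with a linear scan of a small vector. The motivation is visible in the removed guard (`state >= kMVKVkDynamicStateCount`): extension-provided dynamic states use large, sparse enum values that an array of 32 booleans cannot index. A compilable sketch of the same shape, with stand-in enum values (the `SampleLocationsEXT` value matches the real `VK_DYNAMIC_STATE_SAMPLE_LOCATIONS_EXT`):

```cpp
#include <vector>
#include <cstdint>

enum class DynState : uint32_t {
    Viewport           = 0,
    Scissor            = 1,
    DepthBias          = 3,
    SampleLocationsEXT = 1000143000,  // extensions use sparse offset values like this
};

// A small-vector scan handles sparse enum values that a fixed array cannot.
inline bool supportsDynamicState(const std::vector<DynState>& enabled,
                                 DynState state, bool depthBiasEnable) {
    for (DynState ds : enabled) {
        if (ds == state) {
            // Some dynamic states carry additional restrictions.
            if (state == DynState::DepthBias) { return depthBiasEnable; }
            return true;
        }
    }
    return false;
}
```

Pipelines enable at most a handful of dynamic states, so the linear scan is effectively free.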
@@ -365,6 +367,20 @@
 
 #pragma mark Construction
 
+// Extracts and returns a VkPipelineRenderingCreateInfo from the renderPass or the pNext chain of pCreateInfo, or returns null if not found.
+static const VkPipelineRenderingCreateInfo* getRenderingCreateInfo(const VkGraphicsPipelineCreateInfo* pCreateInfo) {
+	if (pCreateInfo->renderPass) {
+		return ((MVKRenderPass*)pCreateInfo->renderPass)->getSubpass(pCreateInfo->subpass)->getPipelineRenderingCreateInfo();
+	}
+	for (const auto* next = (VkBaseInStructure*)pCreateInfo->pNext; next; next = next->pNext) {
+		switch (next->sType) {
+			case VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO: return (VkPipelineRenderingCreateInfo*)next;
+			default: break;
+		}
+	}
+	return nullptr;
+}
+
 MVKGraphicsPipeline::MVKGraphicsPipeline(MVKDevice* device,
 										 MVKPipelineCache* pipelineCache,
 										 MVKPipeline* parent,
@@ -372,11 +388,10 @@
 	MVKPipeline(device, pipelineCache, (MVKPipelineLayout*)pCreateInfo->layout, parent) {
 
 	// Determine rasterization early, as various other structs are validated and interpreted in this context.
-	MVKRenderPass* mvkRendPass = (MVKRenderPass*)pCreateInfo->renderPass;
-	MVKRenderSubpass* mvkRenderSubpass = mvkRendPass->getSubpass(pCreateInfo->subpass);
+	const VkPipelineRenderingCreateInfo* pRendInfo = getRenderingCreateInfo(pCreateInfo);
 	_isRasterizing = !isRasterizationDisabled(pCreateInfo);
-	_isRasterizingColor = _isRasterizing && mvkRenderSubpass->hasColorAttachments();
-	_isRasterizingDepthStencil = _isRasterizing && mvkRenderSubpass->hasDepthStencilAttachment();
+	_isRasterizingColor = _isRasterizing && mvkHasColorAttachments(pRendInfo);
+	_isRasterizingDepthStencil = _isRasterizing && mvkGetDepthStencilFormat(pRendInfo) != VK_FORMAT_UNDEFINED;
 
 	// Get the tessellation shaders, if present. Do this now, because we need to extract
 	// reflection data from them that informs everything else.
@@ -408,13 +423,11 @@
 		}
 	}
 
-	// Track dynamic state in _dynamicStateEnabled array
-	mvkClear(_dynamicStateEnabled, kMVKVkDynamicStateCount);	// start with all dynamic state disabled
+	// Track dynamic state
 	const VkPipelineDynamicStateCreateInfo* pDS = pCreateInfo->pDynamicState;
 	if (pDS) {
 		for (uint32_t i = 0; i < pDS->dynamicStateCount; i++) {
-			VkDynamicState ds = pDS->pDynamicStates[i];
-			_dynamicStateEnabled[ds] = true;
+			_dynamicState.push_back(pDS->pDynamicStates[i]);
 		}
 	}
 
@@ -457,6 +470,9 @@
 		}
 	}
 
+	// Must run after _isRasterizing and _dynamicState are populated
+	initCustomSamplePositions(pCreateInfo);
+
 	// Render pipeline state
 	initMTLRenderPipelineState(pCreateInfo, reflectData);
 
@@ -472,7 +488,7 @@
 		for (uint32_t vpIdx = 0; vpIdx < vpCnt; vpIdx++) {
 			// If the viewport is dynamic, we still add a dummy so that the count will be tracked.
 			VkViewport vp;
-			if ( !_dynamicStateEnabled[VK_DYNAMIC_STATE_VIEWPORT] ) { vp = pVPState->pViewports[vpIdx]; }
+			if ( !supportsDynamicState(VK_DYNAMIC_STATE_VIEWPORT) ) { vp = pVPState->pViewports[vpIdx]; }
 			_viewports.push_back(vp);
 		}
 
@@ -481,7 +497,7 @@
 		for (uint32_t sIdx = 0; sIdx < sCnt; sIdx++) {
 			// If the scissor is dynamic, we still add a dummy so that the count will be tracked.
 			VkRect2D sc;
-			if ( !_dynamicStateEnabled[VK_DYNAMIC_STATE_SCISSOR] ) { sc = pVPState->pScissors[sIdx]; }
+			if ( !supportsDynamicState(VK_DYNAMIC_STATE_SCISSOR) ) { sc = pVPState->pScissors[sIdx]; }
 			_scissors.push_back(sc);
 		}
 	}
@@ -512,6 +528,31 @@
 	return plState;
 }
 
+// Must run after _isRasterizing and _dynamicState are populated
+void MVKGraphicsPipeline::initCustomSamplePositions(const VkGraphicsPipelineCreateInfo* pCreateInfo) {
+
+	// Must ignore allowed bad pMultisampleState pointer if rasterization disabled
+	if ( !(_isRasterizing && pCreateInfo->pMultisampleState) ) { return; }
+
+	for (const auto* next = (VkBaseInStructure*)pCreateInfo->pMultisampleState->pNext; next; next = next->pNext) {
+		switch (next->sType) {
+			case VK_STRUCTURE_TYPE_PIPELINE_SAMPLE_LOCATIONS_STATE_CREATE_INFO_EXT: {
+				auto* pSampLocnsCreateInfo = (VkPipelineSampleLocationsStateCreateInfoEXT*)next;
+				_isUsingCustomSamplePositions = pSampLocnsCreateInfo->sampleLocationsEnable;
+				if (_isUsingCustomSamplePositions && !supportsDynamicState(VK_DYNAMIC_STATE_SAMPLE_LOCATIONS_EXT)) {
+					for (uint32_t slIdx = 0; slIdx < pSampLocnsCreateInfo->sampleLocationsInfo.sampleLocationsCount; slIdx++) {
+						auto& sl = pSampLocnsCreateInfo->sampleLocationsInfo.pSampleLocations[slIdx];
+						_customSamplePositions.push_back(MTLSamplePositionMake(sl.x, sl.y));
+					}
+				}
+				break;
+			}
+			default:
+				break;
+		}
+	}
+}
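The `initCustomSamplePositions()` function above captures static sample locations at pipeline creation, but only when they are enabled and not deferred to dynamic state (in which case `vkCmdSetSampleLocationsEXT` supplies them at encode time). A sketch of that capture logic, using stand-ins for `MTLSamplePosition` and `VkSampleLocationEXT` (both express positions in the unit square):

```cpp
#include <vector>

struct SamplePosition { float x, y; };  // stand-in for MTLSamplePosition
struct SampleLocation { float x, y; };  // stand-in for VkSampleLocationEXT

// Static sample locations are captured at pipeline creation only when
// enabled and not left to dynamic state.
inline std::vector<SamplePosition> captureSamplePositions(
        const std::vector<SampleLocation>& locations,
        bool locationsEnable, bool isDynamicState) {
    std::vector<SamplePosition> positions;
    if (locationsEnable && !isDynamicState) {
        for (const SampleLocation& sl : locations) {
            positions.push_back({sl.x, sl.y});
        }
    }
    return positions;
}
```

Hence the ordering note in the diff: `_dynamicState` must already be populated, because the dynamic-state check decides whether the static positions are copied at all.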
+
 // Constructs the underlying Metal render pipeline.
 void MVKGraphicsPipeline::initMTLRenderPipelineState(const VkGraphicsPipelineCreateInfo* pCreateInfo, const SPIRVTessReflectionData& reflectData) {
 	_mtlTessVertexStageState = nil;
@@ -528,9 +569,8 @@
 	if (!isTessellationPipeline()) {
 		MTLRenderPipelineDescriptor* plDesc = newMTLRenderPipelineDescriptor(pCreateInfo, reflectData);	// temp retain
 		if (plDesc) {
-			MVKRenderPass* mvkRendPass = (MVKRenderPass*)pCreateInfo->renderPass;
-			MVKRenderSubpass* mvkSubpass = mvkRendPass->getSubpass(pCreateInfo->subpass);
-			if (mvkSubpass->isMultiview()) {
+			const VkPipelineRenderingCreateInfo* pRendInfo = getRenderingCreateInfo(pCreateInfo);
+			if (pRendInfo && mvkIsMultiview(pRendInfo->viewMask)) {
 				// We need to adjust the step rate for per-instance attributes to account for the
 				// extra instances needed to render all views. But, there's a problem: vertex input
 				// descriptions are static pipeline state. If we need multiple passes, and some have
@@ -538,8 +578,8 @@
 				// for these passes. We'll need to make a pipeline for every pass view count we can see
 				// in the render pass. This really sucks.
 				std::unordered_set<uint32_t> viewCounts;
-				for (uint32_t passIdx = 0; passIdx < mvkSubpass->getMultiviewMetalPassCount(); ++passIdx) {
-					viewCounts.insert(mvkSubpass->getViewCountInMetalPass(passIdx));
+				for (uint32_t passIdx = 0; passIdx < getDevice()->getMultiviewMetalPassCount(pRendInfo->viewMask); ++passIdx) {
+					viewCounts.insert(getDevice()->getViewCountInMetalPass(pRendInfo->viewMask, passIdx));
 				}
 				auto count = viewCounts.cbegin();
 				adjustVertexInputForMultiview(plDesc.vertexDescriptor, pCreateInfo->pVertexInputState, *count);
@@ -829,7 +869,7 @@
 				}
 				innerLoc = location;
 			}
-			plDesc.vertexDescriptor.attributes[location].bufferIndex = kMVKTessEvalLevelBufferIndex;
+			plDesc.vertexDescriptor.attributes[location].bufferIndex = getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalLevelBufferBinding);
 			if (reflectData.patchKind == spv::ExecutionModeTriangles || output.builtin == spv::BuiltInTessLevelOuter) {
 				plDesc.vertexDescriptor.attributes[location].offset = 0;
 				plDesc.vertexDescriptor.attributes[location].format = MTLVertexFormatHalf4;	// FIXME Should use Float4
@@ -839,7 +879,7 @@
 			}
 		} else if (output.perPatch) {
 			patchOffset = (uint32_t)mvkAlignByteCount(patchOffset, getShaderOutputAlignment(output));
-			plDesc.vertexDescriptor.attributes[output.location].bufferIndex = kMVKTessEvalPatchInputBufferIndex;
+			plDesc.vertexDescriptor.attributes[output.location].bufferIndex = getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalPatchInputBufferBinding);
 			plDesc.vertexDescriptor.attributes[output.location].format = getPixelFormats()->getMTLVertexFormat(mvkFormatFromOutput(output));
 			plDesc.vertexDescriptor.attributes[output.location].offset = patchOffset;
 			patchOffset += getShaderOutputSize(output);
@@ -847,7 +887,7 @@
 			usedPerPatch = true;
 		} else {
 			offset = (uint32_t)mvkAlignByteCount(offset, getShaderOutputAlignment(output));
-			plDesc.vertexDescriptor.attributes[output.location].bufferIndex = kMVKTessEvalInputBufferIndex;
+			plDesc.vertexDescriptor.attributes[output.location].bufferIndex = getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalInputBufferBinding);
 			plDesc.vertexDescriptor.attributes[output.location].format = getPixelFormats()->getMTLVertexFormat(mvkFormatFromOutput(output));
 			plDesc.vertexDescriptor.attributes[output.location].offset = offset;
 			offset += getShaderOutputSize(output);
@@ -856,16 +896,19 @@
 		}
 	}
 	if (usedPerVertex) {
-		plDesc.vertexDescriptor.layouts[kMVKTessEvalInputBufferIndex].stepFunction = MTLVertexStepFunctionPerPatchControlPoint;
-		plDesc.vertexDescriptor.layouts[kMVKTessEvalInputBufferIndex].stride = mvkAlignByteCount(offset, getShaderOutputAlignment(*firstVertex));
+		uint32_t mtlVBIdx = getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalInputBufferBinding);
+		plDesc.vertexDescriptor.layouts[mtlVBIdx].stepFunction = MTLVertexStepFunctionPerPatchControlPoint;
+		plDesc.vertexDescriptor.layouts[mtlVBIdx].stride = mvkAlignByteCount(offset, getShaderOutputAlignment(*firstVertex));
 	}
 	if (usedPerPatch) {
-		plDesc.vertexDescriptor.layouts[kMVKTessEvalPatchInputBufferIndex].stepFunction = MTLVertexStepFunctionPerPatch;
-		plDesc.vertexDescriptor.layouts[kMVKTessEvalPatchInputBufferIndex].stride = mvkAlignByteCount(patchOffset, getShaderOutputAlignment(*firstPatch));
+		uint32_t mtlVBIdx = getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalPatchInputBufferBinding);
+		plDesc.vertexDescriptor.layouts[mtlVBIdx].stepFunction = MTLVertexStepFunctionPerPatch;
+		plDesc.vertexDescriptor.layouts[mtlVBIdx].stride = mvkAlignByteCount(patchOffset, getShaderOutputAlignment(*firstPatch));
 	}
 	if (outerLoc != (uint32_t)(-1) || innerLoc != (uint32_t)(-1)) {
-		plDesc.vertexDescriptor.layouts[kMVKTessEvalLevelBufferIndex].stepFunction = MTLVertexStepFunctionPerPatch;
-		plDesc.vertexDescriptor.layouts[kMVKTessEvalLevelBufferIndex].stride =
+		uint32_t mtlVBIdx = getMetalBufferIndexForVertexAttributeBinding(kMVKTessEvalLevelBufferBinding);
+		plDesc.vertexDescriptor.layouts[mtlVBIdx].stepFunction = MTLVertexStepFunctionPerPatch;
+		plDesc.vertexDescriptor.layouts[mtlVBIdx].stride =
 			reflectData.patchKind == spv::ExecutionModeTriangles ? sizeof(MTLTriangleTessellationFactorsHalf) :
 																   sizeof(MTLQuadTessellationFactorsHalf);
 	}
@@ -1034,7 +1077,7 @@
 	shaderConfig.options.entryPointName = _pTessCtlSS->pName;
 	shaderConfig.options.mslOptions.swizzle_buffer_index = _swizzleBufferIndex.stages[kMVKShaderStageTessCtl];
 	shaderConfig.options.mslOptions.indirect_params_buffer_index = _indirectParamsIndex.stages[kMVKShaderStageTessCtl];
-	shaderConfig.options.mslOptions.shader_input_buffer_index = kMVKTessCtlInputBufferIndex;
+	shaderConfig.options.mslOptions.shader_input_buffer_index = getMetalBufferIndexForVertexAttributeBinding(kMVKTessCtlInputBufferBinding);
 	shaderConfig.options.mslOptions.shader_output_buffer_index = _outputBufferIndex.stages[kMVKShaderStageTessCtl];
 	shaderConfig.options.mslOptions.shader_patch_output_buffer_index = _tessCtlPatchOutputBufferIndex;
 	shaderConfig.options.mslOptions.shader_tess_factor_buffer_index = _tessCtlLevelBufferIndex;
@@ -1423,11 +1466,6 @@
 
 void MVKGraphicsPipeline::addFragmentOutputToPipeline(MTLRenderPipelineDescriptor* plDesc,
 													  const VkGraphicsPipelineCreateInfo* pCreateInfo) {
-
-    // Retrieve the render subpass for which this pipeline is being constructed
-    MVKRenderPass* mvkRendPass = (MVKRenderPass*)pCreateInfo->renderPass;
-    MVKRenderSubpass* mvkRenderSubpass = mvkRendPass->getSubpass(pCreateInfo->subpass);
-
 	// Topology
 	if (pCreateInfo->pInputAssemblyState) {
 		plDesc.inputPrimitiveTopologyMVK = isRenderingPoints(pCreateInfo)
@@ -1435,14 +1473,17 @@
 												: mvkMTLPrimitiveTopologyClassFromVkPrimitiveTopology(pCreateInfo->pInputAssemblyState->topology);
 	}
 
-    // Color attachments - must ignore bad pColorBlendState pointer if rasterization is disabled or subpass has no color attachments
+	const VkPipelineRenderingCreateInfo* pRendInfo = getRenderingCreateInfo(pCreateInfo);
+
+	// Color attachments - must ignore bad pColorBlendState pointer if rasterization is disabled or subpass has no color attachments
     uint32_t caCnt = 0;
-    if (_isRasterizingColor && pCreateInfo->pColorBlendState) {
+    if (_isRasterizingColor && pRendInfo && pCreateInfo->pColorBlendState) {
         for (uint32_t caIdx = 0; caIdx < pCreateInfo->pColorBlendState->attachmentCount; caIdx++) {
             const VkPipelineColorBlendAttachmentState* pCA = &pCreateInfo->pColorBlendState->pAttachments[caIdx];
 
-            MTLRenderPipelineColorAttachmentDescriptor* colorDesc = plDesc.colorAttachments[caIdx];
-            colorDesc.pixelFormat = getPixelFormats()->getMTLPixelFormat(mvkRenderSubpass->getColorAttachmentFormat(caIdx));
+			MTLPixelFormat mtlPixFmt = getPixelFormats()->getMTLPixelFormat(pRendInfo->pColorAttachmentFormats[caIdx]);
+			MTLRenderPipelineColorAttachmentDescriptor* colorDesc = plDesc.colorAttachments[caIdx];
+            colorDesc.pixelFormat = mtlPixFmt;
             if (colorDesc.pixelFormat == MTLPixelFormatRGB9E5Float) {
                 // Metal doesn't allow disabling individual channels for a RGB9E5 render target.
                 // Either all must be disabled or none must be disabled.
@@ -1455,7 +1496,7 @@
             // Don't set the blend state if we're not using this attachment.
             // The pixel format will be MTLPixelFormatInvalid in that case, and
             // Metal asserts if we turn on blending with that pixel format.
-            if (mvkRenderSubpass->isColorAttachmentUsed(caIdx)) {
+            if (mtlPixFmt) {
                 caCnt++;
                 colorDesc.blendingEnabled = pCA->blendEnable;
                 colorDesc.rgbBlendOperation = mvkMTLBlendOperationFromVkBlendOp(pCA->colorBlendOp);
@@ -1470,7 +1511,7 @@
 
     // Depth & stencil attachments
 	MVKPixelFormats* pixFmts = getPixelFormats();
-    MTLPixelFormat mtlDSFormat = pixFmts->getMTLPixelFormat(mvkRenderSubpass->getDepthStencilFormat());
+    MTLPixelFormat mtlDSFormat = pixFmts->getMTLPixelFormat(mvkGetDepthStencilFormat(pRendInfo));
     if (pixFmts->isDepthFormat(mtlDSFormat)) { plDesc.depthAttachmentPixelFormat = mtlDSFormat; }
     if (pixFmts->isStencilFormat(mtlDSFormat)) { plDesc.stencilAttachmentPixelFormat = mtlDSFormat; }
 
@@ -1487,9 +1528,13 @@
     // Multisampling - must ignore allowed bad pMultisampleState pointer if rasterization disabled
     if (_isRasterizing && pCreateInfo->pMultisampleState) {
         plDesc.sampleCount = mvkSampleCountFromVkSampleCountFlagBits(pCreateInfo->pMultisampleState->rasterizationSamples);
-        mvkRenderSubpass->setDefaultSampleCount(pCreateInfo->pMultisampleState->rasterizationSamples);
         plDesc.alphaToCoverageEnabled = pCreateInfo->pMultisampleState->alphaToCoverageEnable;
         plDesc.alphaToOneEnabled = pCreateInfo->pMultisampleState->alphaToOneEnable;
+
+		// If the pipeline uses a specific render subpass, set its default sample count
+		if (pCreateInfo->renderPass) {
+			((MVKRenderPass*)pCreateInfo->renderPass)->getSubpass(pCreateInfo->subpass)->setDefaultSampleCount(pCreateInfo->pMultisampleState->rasterizationSamples);
+		}
     }
 }
 
@@ -1529,16 +1574,17 @@
 	// FIXME: Many of these are optional. We shouldn't set the ones that aren't
 	// present--or at least, we should move the ones that are down to avoid running over
 	// the limit of available buffers. But we can't know that until we compile the shaders.
+	initReservedVertexAttributeBufferCount(pCreateInfo);
 	for (uint32_t i = kMVKShaderStageVertex; i < kMVKShaderStageCount; i++) {
 		MVKShaderStage stage = (MVKShaderStage)i;
-		_dynamicOffsetBufferIndex.stages[stage] = getImplicitBufferIndex(pCreateInfo, stage, 0);
-		_bufferSizeBufferIndex.stages[stage] = getImplicitBufferIndex(pCreateInfo, stage, 1);
-		_swizzleBufferIndex.stages[stage] = getImplicitBufferIndex(pCreateInfo, stage, 2);
-		_indirectParamsIndex.stages[stage] = getImplicitBufferIndex(pCreateInfo, stage, 3);
-		_outputBufferIndex.stages[stage] = getImplicitBufferIndex(pCreateInfo, stage, 4);
+		_dynamicOffsetBufferIndex.stages[stage] = getImplicitBufferIndex(stage, 0);
+		_bufferSizeBufferIndex.stages[stage] = getImplicitBufferIndex(stage, 1);
+		_swizzleBufferIndex.stages[stage] = getImplicitBufferIndex(stage, 2);
+		_indirectParamsIndex.stages[stage] = getImplicitBufferIndex(stage, 3);
+		_outputBufferIndex.stages[stage] = getImplicitBufferIndex(stage, 4);
 		if (stage == kMVKShaderStageTessCtl) {
-			_tessCtlPatchOutputBufferIndex = getImplicitBufferIndex(pCreateInfo, stage, 5);
-			_tessCtlLevelBufferIndex = getImplicitBufferIndex(pCreateInfo, stage, 6);
+			_tessCtlPatchOutputBufferIndex = getImplicitBufferIndex(stage, 5);
+			_tessCtlLevelBufferIndex = getImplicitBufferIndex(stage, 6);
 		}
 	}
 	// Since we currently can't use multiview with tessellation or geometry shaders,
@@ -1546,10 +1592,9 @@
 	// view range buffer as for the indirect parameters buffer.
 	_viewRangeBufferIndex = _indirectParamsIndex;
 
-    MVKRenderPass* mvkRendPass = (MVKRenderPass*)pCreateInfo->renderPass;
-    MVKRenderSubpass* mvkRenderSubpass = mvkRendPass->getSubpass(pCreateInfo->subpass);
+	const VkPipelineRenderingCreateInfo* pRendInfo = getRenderingCreateInfo(pCreateInfo);
 	MVKPixelFormats* pixFmts = getPixelFormats();
-    MTLPixelFormat mtlDSFormat = pixFmts->getMTLPixelFormat(mvkRenderSubpass->getDepthStencilFormat());
+    MTLPixelFormat mtlDSFormat = pixFmts->getMTLPixelFormat(mvkGetDepthStencilFormat(pRendInfo));
 
 	// Disable any unused color attachments, because Metal validation can complain if the
 	// fragment shader outputs a color value without a corresponding color attachment.
@@ -1560,7 +1605,7 @@
 	shaderConfig.options.mslOptions.enable_frag_output_mask = hasA2C ? 1 : 0;
 	if (_isRasterizingColor && pCreateInfo->pColorBlendState) {
 		for (uint32_t caIdx = 0; caIdx < pCreateInfo->pColorBlendState->attachmentCount; caIdx++) {
-			if (mvkRenderSubpass->isColorAttachmentUsed(caIdx)) {
+			if (mvkIsColorAttachmentUsed(pRendInfo, caIdx)) {
 				mvkEnableFlags(shaderConfig.options.mslOptions.enable_frag_output_mask, 1 << caIdx);
 			}
 		}
@@ -1574,7 +1619,7 @@
     shaderConfig.options.shouldFlipVertexY = mvkConfig().shaderConversionFlipVertexY;
     shaderConfig.options.mslOptions.swizzle_texture_samples = _fullImageViewSwizzle && !getDevice()->_pMetalFeatures->nativeTextureSwizzle;
     shaderConfig.options.mslOptions.tess_domain_origin_lower_left = pTessDomainOriginState && pTessDomainOriginState->domainOrigin == VK_TESSELLATION_DOMAIN_ORIGIN_LOWER_LEFT;
-    shaderConfig.options.mslOptions.multiview = mvkRendPass->isMultiview();
+    shaderConfig.options.mslOptions.multiview = mvkIsMultiview(pRendInfo->viewMask);
     shaderConfig.options.mslOptions.multiview_layered_rendering = getPhysicalDevice()->canUseInstancingForMultiview();
     shaderConfig.options.mslOptions.view_index_from_device_index = mvkAreAllFlagsEnabled(pCreateInfo->flags, VK_PIPELINE_CREATE_VIEW_INDEX_FROM_DEVICE_INDEX_BIT);
 #if MVK_MACOS
@@ -1589,17 +1634,48 @@
     shaderConfig.options.numTessControlPoints = reflectData.numControlPoints;
 }
 
-uint32_t MVKGraphicsPipeline::getImplicitBufferIndex(const VkGraphicsPipelineCreateInfo* pCreateInfo, MVKShaderStage stage, uint32_t bufferIndexOffset) {
-	return _device->_pMetalFeatures->maxPerStageBufferCount - (getReservedBufferCount(pCreateInfo, stage) + bufferIndexOffset + 1);
+uint32_t MVKGraphicsPipeline::getImplicitBufferIndex(MVKShaderStage stage, uint32_t bufferIndexOffset) {
+	return getMetalBufferIndexForVertexAttributeBinding(_reservedVertexAttributeBufferCount.stages[stage] + bufferIndexOffset);
 }
 
-uint32_t MVKGraphicsPipeline::getReservedBufferCount(const VkGraphicsPipelineCreateInfo* pCreateInfo, MVKShaderStage stage) {
-	switch (stage) {
-		case kMVKShaderStageVertex:		return pCreateInfo->pVertexInputState->vertexBindingDescriptionCount;
-		case kMVKShaderStageTessCtl:	return kMVKTessCtlNumReservedBuffers;
-		case kMVKShaderStageTessEval:	return kMVKTessEvalNumReservedBuffers;
-		default:						return 0;
+// Sets the number of vertex attribute buffers consumed by this pipeline at each stage.
+// Any implicit buffers needed by this pipeline will be assigned indexes below the range
+// defined by this count, which itself sits just below the max number of Metal buffer
+// bindings per stage.
+// Must be called before any calls to getImplicitBufferIndex().
+void MVKGraphicsPipeline::initReservedVertexAttributeBufferCount(const VkGraphicsPipelineCreateInfo* pCreateInfo) {
+	int32_t maxBinding = -1;
+	uint32_t xltdBuffCnt = 0;
+
+	const VkPipelineVertexInputStateCreateInfo* pVI = pCreateInfo->pVertexInputState;
+	uint32_t vaCnt = pVI->vertexAttributeDescriptionCount;
+	uint32_t vbCnt = pVI->vertexBindingDescriptionCount;
+
+	// Determine the highest binding number used by the vertex buffers
+	for (uint32_t vbIdx = 0; vbIdx < vbCnt; vbIdx++) {
+		const VkVertexInputBindingDescription* pVKVB = &pVI->pVertexBindingDescriptions[vbIdx];
+		maxBinding = max<int32_t>(pVKVB->binding, maxBinding);
+
+		// Iterate through the vertex attributes and determine if any need a synthetic binding buffer to
+		// accommodate offsets that are outside the stride, which Vulkan supports, but Metal does not.
+		// This value will be worst case, as some synthetic buffers may end up being shared.
+		for (uint32_t vaIdx = 0; vaIdx < vaCnt; vaIdx++) {
+			const VkVertexInputAttributeDescription* pVKVA = &pVI->pVertexAttributeDescriptions[vaIdx];
+			if ((pVKVA->binding == pVKVB->binding) && (pVKVA->offset + getPixelFormats()->getBytesPerBlock(pVKVA->format) > pVKVB->stride)) {
+				xltdBuffCnt++;
+			}
+		}
 	}
+
+	// The number of reserved bindings we need for the vertex stage is determined from the largest vertex
+	// attribute binding number, plus any synthetic buffer bindings created to support translated offsets.
+	mvkClear<uint32_t>(_reservedVertexAttributeBufferCount.stages, kMVKShaderStageCount);
+	_reservedVertexAttributeBufferCount.stages[kMVKShaderStageVertex] = (maxBinding + 1) + xltdBuffCnt;
+	_reservedVertexAttributeBufferCount.stages[kMVKShaderStageTessCtl] = kMVKTessCtlNumReservedBuffers;
+	_reservedVertexAttributeBufferCount.stages[kMVKShaderStageTessEval] = kMVKTessEvalNumReservedBuffers;
+}
+
+bool MVKGraphicsPipeline::isValidVertexBufferIndex(MVKShaderStage stage, uint32_t mtlBufferIndex) {
+	return mtlBufferIndex < _descriptorBufferCounts.stages[stage] || mtlBufferIndex > getImplicitBufferIndex(stage, 0);
 }
 
 // Initializes the vertex attributes in a shader conversion configuration.
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.h b/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.h
index 81f1b6e..d440b2e 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.h
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.h
@@ -42,7 +42,6 @@
 
 public:
 
-
 	/** Returns the Vulkan API opaque object controlling this object. */
 	MVKVulkanAPIObject* getVulkanAPIObject() override;
 
@@ -83,10 +82,10 @@
 	void setDefaultSampleCount(VkSampleCountFlagBits count) { _defaultSampleCount = count; }
 
 	/** Returns whether or not this is a multiview subpass. */
-	bool isMultiview() const { return _viewMask != 0; }
+	bool isMultiview() const { return _pipelineRenderingCreateInfo.viewMask != 0; }
 
 	/** Returns the total number of views to be rendered. */
-	uint32_t getViewCount() const { return __builtin_popcount(_viewMask); }
+	uint32_t getViewCount() const { return __builtin_popcount(_pipelineRenderingCreateInfo.viewMask); }
 
 	/** Returns the number of Metal render passes needed to render all views. */
 	uint32_t getMultiviewMetalPassCount() const;
@@ -100,6 +99,9 @@
 	/** Returns the number of views to be rendered in all multiview passes up to the given one. */
 	uint32_t getViewCountUpToMetalPass(uint32_t passIdx) const;
 
+	/** Returns the pipeline rendering create info that describes this subpass. */
+	const VkPipelineRenderingCreateInfo* getPipelineRenderingCreateInfo() { return &_pipelineRenderingCreateInfo; }
+
 	/** 
 	 * Populates the specified Metal MTLRenderPassDescriptor with content from this
 	 * instance, the specified framebuffer, and the specified array of clear values
@@ -151,19 +153,21 @@
 
 	uint32_t getViewMaskGroupForMetalPass(uint32_t passIdx);
 	MVKMTLFmtCaps getRequiredFormatCapabilitiesForAttachmentAt(uint32_t rpAttIdx);
+	void populatePipelineRenderingCreateInfo();
 
 	MVKRenderPass* _renderPass;
-	uint32_t _subpassIndex;
-	uint32_t _viewMask;
 	MVKSmallVector<VkAttachmentReference2, kMVKDefaultAttachmentCount> _inputAttachments;
 	MVKSmallVector<VkAttachmentReference2, kMVKDefaultAttachmentCount> _colorAttachments;
 	MVKSmallVector<VkAttachmentReference2, kMVKDefaultAttachmentCount> _resolveAttachments;
 	MVKSmallVector<uint32_t, kMVKDefaultAttachmentCount> _preserveAttachments;
+	MVKSmallVector<VkFormat, kMVKDefaultAttachmentCount> _colorAttachmentFormats;
+	VkPipelineRenderingCreateInfo _pipelineRenderingCreateInfo;
 	VkAttachmentReference2 _depthStencilAttachment;
 	VkAttachmentReference2 _depthStencilResolveAttachment;
 	VkResolveModeFlagBits _depthResolveMode = VK_RESOLVE_MODE_NONE;
 	VkResolveModeFlagBits _stencilResolveMode = VK_RESOLVE_MODE_NONE;
 	VkSampleCountFlagBits _defaultSampleCount = VK_SAMPLE_COUNT_1_BIT;
+	uint32_t _subpassIndex;
 };
 
 
@@ -214,11 +218,9 @@
     /** Returns whether this attachment should be cleared in the subpass. */
     bool shouldClearAttachment(MVKRenderSubpass* subpass, bool isStencil);
 
-	/** Constructs an instance for the specified parent renderpass. */
 	MVKRenderPassAttachment(MVKRenderPass* renderPass,
 							const VkAttachmentDescription* pCreateInfo);
 
-	/** Constructs an instance for the specified parent renderpass. */
 	MVKRenderPassAttachment(MVKRenderPass* renderPass,
 							const VkAttachmentDescription2* pCreateInfo);
 
@@ -261,16 +263,23 @@
     /** Returns the granularity of the render area of this instance.  */
     VkExtent2D getRenderAreaGranularity();
 
-	/** Returns the format of the color attachment at the specified index. */
-	MVKRenderSubpass* getSubpass(uint32_t subpassIndex);
+	/** Returns the number of subpasses. */
+	size_t getSubpassCount() { return _subpasses.size(); }
+
+	/** Returns the subpass at the specified index. */
+	MVKRenderSubpass* getSubpass(uint32_t subpassIndex) { return &_subpasses[subpassIndex]; }
 
 	/** Returns whether or not this render pass is a multiview render pass. */
 	bool isMultiview() const;
 
-	/** Constructs an instance for the specified device. */
+	/** Returns the dynamic rendering flags. */
+	VkRenderingFlags getRenderingFlags() { return _renderingFlags; }
+
+	/** Sets the dynamic rendering flags. */
+	void setRenderingFlags(VkRenderingFlags renderingFlags) { _renderingFlags = renderingFlags; }
+
 	MVKRenderPass(MVKDevice* device, const VkRenderPassCreateInfo* pCreateInfo);
 
-	/** Constructs an instance for the specified device. */
 	MVKRenderPass(MVKDevice* device, const VkRenderPassCreateInfo2* pCreateInfo);
 
 protected:
@@ -282,6 +291,44 @@
 	MVKSmallVector<MVKRenderPassAttachment> _attachments;
 	MVKSmallVector<MVKRenderSubpass> _subpasses;
 	MVKSmallVector<VkSubpassDependency2> _subpassDependencies;
+	VkRenderingFlags _renderingFlags = 0;
 
 };
 
+
+#pragma mark -
+#pragma mark Support functions
+
+/** Returns a MVKRenderPass object created from the rendering info. */
+MVKRenderPass* mvkCreateRenderPass(MVKDevice* device, const VkRenderingInfo* pRenderingInfo);
+
+/**
+ * Extracts the usable attachments and their clear values from the rendering info,
+ * and sets them in the corresponding arrays, which must be large enough to hold
+ * all of the extracted values. Returns the number of attachments extracted.
+ * For consistency, the clear values of any resolve attachments are populated,
+ * even though they are ignored.
+ */
+uint32_t mvkGetAttachments(const VkRenderingInfo* pRenderingInfo,
+						   MVKImageView* attachments[],
+						   VkClearValue clearValues[]);
+
+/** Returns whether the view mask uses multiview. */
+static inline bool mvkIsMultiview(uint32_t viewMask) { return viewMask != 0; }
+
+/** Returns whether the specified color attachment is being used. */
+bool mvkIsColorAttachmentUsed(const VkPipelineRenderingCreateInfo* pRendInfo, uint32_t colorAttIdx);
+
+/** Returns whether any color attachment is being used. */
+bool mvkHasColorAttachments(const VkPipelineRenderingCreateInfo* pRendInfo);
+
+/** Extracts and returns the combined depth/stencil format. */
+VkFormat mvkGetDepthStencilFormat(const VkPipelineRenderingCreateInfo* pRendInfo);
+
+/**
+ * Extracts the first view, number of views, and the portion of the mask
+ * to be rendered from the lowest clump of set bits in a view mask.
+ */
+uint32_t mvkGetNextViewMaskGroup(uint32_t viewMask, uint32_t* startView,
+								 uint32_t* viewCount, uint32_t *groupMask = nullptr);
+
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.mm b/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.mm
index 6d9c04a..6500c99 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKRenderPass.mm
@@ -93,102 +93,42 @@
 	return VK_SAMPLE_COUNT_1_BIT;
 }
 
-// Extract the first view, number of views, and the portion of the mask to be rendered from
-// the lowest clump of set bits in a view mask.
-static uint32_t getNextViewMaskGroup(uint32_t viewMask, uint32_t* startView, uint32_t* viewCount, uint32_t *groupMask = nullptr) {
-	// First, find the first set bit. This is the start of the next clump of views to be rendered.
-	// n.b. ffs(3) returns a 1-based index. This actually bit me during development of this feature.
-	int pos = ffs(viewMask) - 1;
-	int end = pos;
-	if (groupMask) { *groupMask = 0; }
-	// Now we'll step through the bits one at a time until we find a bit that isn't set.
-	// This is one past the end of the next clump. Clear the bits as we go, so we can use
-	// ffs(3) again on the next clump.
-	// TODO: Find a way to make this faster.
-	while (viewMask & (1 << end)) {
-		if (groupMask) { *groupMask |= viewMask & (1 << end); }
-		viewMask &= ~(1 << (end++));
-	}
-	if (startView) { *startView = pos; }
-	if (viewCount) { *viewCount = end - pos; }
-	return viewMask;
-}
-
 // Get the portion of the view mask that will be rendered in the specified Metal render pass.
 uint32_t MVKRenderSubpass::getViewMaskGroupForMetalPass(uint32_t passIdx) {
-	if (!_viewMask) { return 0; }
+	if (!_pipelineRenderingCreateInfo.viewMask) { return 0; }
 	assert(passIdx < getMultiviewMetalPassCount());
 	if (!_renderPass->getPhysicalDevice()->canUseInstancingForMultiview()) {
 		return 1 << getFirstViewIndexInMetalPass(passIdx);
 	}
-	uint32_t mask = _viewMask, groupMask = 0;
+	uint32_t mask = _pipelineRenderingCreateInfo.viewMask, groupMask = 0;
 	for (uint32_t i = 0; i <= passIdx; ++i) {
-		mask = getNextViewMaskGroup(mask, nullptr, nullptr, &groupMask);
+		mask = mvkGetNextViewMaskGroup(mask, nullptr, nullptr, &groupMask);
 	}
 	return groupMask;
 }
 
 uint32_t MVKRenderSubpass::getMultiviewMetalPassCount() const {
-	if (!_viewMask) { return 0; }
-	if (!_renderPass->getPhysicalDevice()->canUseInstancingForMultiview()) {
-		// If we can't use instanced drawing for this, we'll have to unroll the render pass.
-		return __builtin_popcount(_viewMask);
-	}
-	uint32_t mask = _viewMask;
-	uint32_t count;
-	// Step through each clump until there are no more clumps. I'll know this has
-	// happened when the mask becomes 0, since getNextViewMaskGroup() clears each group of bits
-	// as it finds them, and returns the remainder of the mask.
-	for (count = 0; mask != 0; ++count) {
-		mask = getNextViewMaskGroup(mask, nullptr, nullptr);
-	}
-	return count;
+	return _renderPass->getDevice()->getMultiviewMetalPassCount(_pipelineRenderingCreateInfo.viewMask);
 }
 
 uint32_t MVKRenderSubpass::getFirstViewIndexInMetalPass(uint32_t passIdx) const {
-	if (!_viewMask) { return 0; }
-	assert(passIdx < getMultiviewMetalPassCount());
-	uint32_t mask = _viewMask;
-	uint32_t startView = 0, viewCount = 0;
-	if (!_renderPass->getPhysicalDevice()->canUseInstancingForMultiview()) {
-		for (uint32_t i = 0; mask != 0; ++i) {
-			mask = getNextViewMaskGroup(mask, &startView, &viewCount);
-			while (passIdx-- > 0 && viewCount-- > 0) {
-				startView++;
-			}
-		}
-	} else {
-		for (uint32_t i = 0; i <= passIdx; ++i) {
-			mask = getNextViewMaskGroup(mask, &startView, nullptr);
-		}
-	}
-	return startView;
+	return _renderPass->getDevice()->getFirstViewIndexInMetalPass(_pipelineRenderingCreateInfo.viewMask, passIdx);
 }
 
 uint32_t MVKRenderSubpass::getViewCountInMetalPass(uint32_t passIdx) const {
-	if (!_viewMask) { return 0; }
-	assert(passIdx < getMultiviewMetalPassCount());
-	if (!_renderPass->getPhysicalDevice()->canUseInstancingForMultiview()) {
-		return 1;
-	}
-	uint32_t mask = _viewMask;
-	uint32_t viewCount = 0;
-	for (uint32_t i = 0; i <= passIdx; ++i) {
-		mask = getNextViewMaskGroup(mask, nullptr, &viewCount);
-	}
-	return viewCount;
+	return _renderPass->getDevice()->getViewCountInMetalPass(_pipelineRenderingCreateInfo.viewMask, passIdx);
 }
 
 uint32_t MVKRenderSubpass::getViewCountUpToMetalPass(uint32_t passIdx) const {
-	if (!_viewMask) { return 0; }
+	if (!_pipelineRenderingCreateInfo.viewMask) { return 0; }
 	if (!_renderPass->getPhysicalDevice()->canUseInstancingForMultiview()) {
 		return passIdx+1;
 	}
-	uint32_t mask = _viewMask;
+	uint32_t mask = _pipelineRenderingCreateInfo.viewMask;
 	uint32_t totalViewCount = 0;
 	for (uint32_t i = 0; i <= passIdx; ++i) {
 		uint32_t viewCount;
-		mask = getNextViewMaskGroup(mask, nullptr, &viewCount);
+		mask = mvkGetNextViewMaskGroup(mask, nullptr, &viewCount);
 		totalViewCount += viewCount;
 	}
 	return totalViewCount;
@@ -314,9 +254,7 @@
 	// If Metal does not support rendering without attachments, create a dummy attachment to pass Metal validation.
 	if (caUsedCnt == 0 && dsRPAttIdx == VK_ATTACHMENT_UNUSED) {
         if (_renderPass->getDevice()->_pMetalFeatures->renderWithoutAttachments) {
-#if MVK_MACOS_OR_IOS
             mtlRPDesc.defaultRasterSampleCount = mvkSampleCountFromVkSampleCountFlagBits(_defaultSampleCount);
-#endif
 		} else {
 			MTLRenderPassColorAttachmentDescriptor* mtlColorAttDesc = mtlRPDesc.colorAttachments[0];
 			mtlColorAttDesc.texture = framebuffer->getDummyAttachmentMTLTexture(this, passIdx);
@@ -467,13 +405,32 @@
 	}
 }
 
+// Must be called after the renderpass has both subpasses and attachments bound.
+void MVKRenderSubpass::populatePipelineRenderingCreateInfo() {
+	MVKPixelFormats* pixFmts = _renderPass->getPixelFormats();
+	_pipelineRenderingCreateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO;
+	_pipelineRenderingCreateInfo.pNext = nullptr;
+
+	uint32_t caCnt = getColorAttachmentCount();
+	for (uint32_t caIdx = 0; caIdx < caCnt; caIdx++) {
+		_colorAttachmentFormats.push_back(getColorAttachmentFormat(caIdx));
+	}
+	_pipelineRenderingCreateInfo.pColorAttachmentFormats = _colorAttachmentFormats.data();
+	_pipelineRenderingCreateInfo.colorAttachmentCount = caCnt;
+
+	VkFormat dsFmt = getDepthStencilFormat();
+	MTLPixelFormat dsMTLFmt = pixFmts->getMTLPixelFormat(dsFmt);
+	_pipelineRenderingCreateInfo.depthAttachmentFormat = pixFmts->isDepthFormat(dsMTLFmt) ? dsFmt : VK_FORMAT_UNDEFINED;
+	_pipelineRenderingCreateInfo.stencilAttachmentFormat = pixFmts->isStencilFormat(dsMTLFmt) ? dsFmt : VK_FORMAT_UNDEFINED;
+}
+
 MVKRenderSubpass::MVKRenderSubpass(MVKRenderPass* renderPass,
 								   const VkSubpassDescription* pCreateInfo,
 								   const VkRenderPassInputAttachmentAspectCreateInfo* pInputAspects,
 								   uint32_t viewMask) {
 	_renderPass = renderPass;
 	_subpassIndex = (uint32_t)_renderPass->_subpasses.size();
-	_viewMask = viewMask;
+	_pipelineRenderingCreateInfo.viewMask = viewMask;
 
 	// Add attachments
 	_inputAttachments.reserve(pCreateInfo->inputAttachmentCount);
@@ -535,7 +492,7 @@
 
 	_renderPass = renderPass;
 	_subpassIndex = (uint32_t)_renderPass->_subpasses.size();
-	_viewMask = pCreateInfo->viewMask;
+	_pipelineRenderingCreateInfo.viewMask = pCreateInfo->viewMask;
 
 	// Add attachments
 	_inputAttachments.reserve(pCreateInfo->inputAttachmentCount);
@@ -600,11 +557,13 @@
 #if MVK_APPLE_SILICON
 	isMemorylessAttachment = attachment->getMTLTexture().storageMode == MTLStorageModeMemoryless;
 #endif
+	bool isResuming = mvkIsAnyFlagEnabled(_renderPass->getRenderingFlags(), VK_RENDERING_RESUMING_BIT);
 
 	// Only allow clearing of entire attachment if we're actually
 	// rendering to the entire attachment AND we're in the first subpass.
+	// If the renderpass was suspended, and is now being resumed, load the contents.
 	MTLLoadAction mtlLA;
-	if (loadOverride || !isRenderingEntireAttachment || !isFirstUseOfAttachment(subpass)) {
+	if (loadOverride || isResuming || !isRenderingEntireAttachment || !isFirstUseOfAttachment(subpass)) {
 		mtlLA = MTLLoadActionLoad;
     } else {
         VkAttachmentLoadOp loadOp = isStencil ? _info.stencilLoadOp : _info.loadOp;
@@ -679,14 +638,14 @@
 	VkRect2D renderArea = cmdEncoder->clipToRenderArea({{0, 0}, {kMVKUndefinedLargeUInt32, kMVKUndefinedLargeUInt32}});
 	uint32_t startView, viewCount;
 	do {
-		clearMask = getNextViewMaskGroup(clearMask, &startView, &viewCount);
+		clearMask = mvkGetNextViewMaskGroup(clearMask, &startView, &viewCount);
 		clearRects.push_back({renderArea, startView, viewCount});
 	} while (clearMask);
 }
 
 bool MVKRenderPassAttachment::isFirstUseOfAttachment(MVKRenderSubpass* subpass) {
 	if ( subpass->isMultiview() ) {
-		return _firstUseViewMasks[subpass->_subpassIndex] == subpass->_viewMask;
+		return _firstUseViewMasks[subpass->_subpassIndex] == subpass->_pipelineRenderingCreateInfo.viewMask;
 	} else {
 		return _firstUseSubpassIdx == subpass->_subpassIndex;
 	}
@@ -694,7 +653,7 @@
 
 bool MVKRenderPassAttachment::isLastUseOfAttachment(MVKRenderSubpass* subpass) {
 	if ( subpass->isMultiview() ) {
-		return _lastUseViewMasks[subpass->_subpassIndex] == subpass->_viewMask;
+		return _lastUseViewMasks[subpass->_subpassIndex] == subpass->_pipelineRenderingCreateInfo.viewMask;
 	} else {
 		return _lastUseSubpassIdx == subpass->_subpassIndex;
 	}
@@ -707,7 +666,12 @@
 														  bool canResolveFormat,
 														  bool isStencil,
 														  bool storeOverride) {
-    // If a resolve attachment exists, this attachment must resolve once complete.
+
+	// If the renderpass is going to be suspended, and resumed later, store the contents to preserve them until then.
+	bool isSuspending = mvkIsAnyFlagEnabled(_renderPass->getRenderingFlags(), VK_RENDERING_SUSPENDING_BIT);
+	if (isSuspending) { return MTLStoreActionStore; }
+
+	// If a resolve attachment exists, this attachment must resolve once complete.
     if (hasResolveAttachment && canResolveFormat && !_renderPass->getDevice()->_pMetalFeatures->combinedStoreResolveAction) {
         return MTLStoreActionMultisampleResolve;
     }
@@ -759,7 +723,7 @@
 			_firstUseSubpassIdx = min(spIdx, _firstUseSubpassIdx);
 			_lastUseSubpassIdx = max(spIdx, _lastUseSubpassIdx);
 			if ( subPass.isMultiview() ) {
-				uint32_t viewMask = subPass._viewMask;
+				uint32_t viewMask = subPass._pipelineRenderingCreateInfo.viewMask;
 				std::for_each(_lastUseViewMasks.begin(), _lastUseViewMasks.end(), [viewMask](uint32_t& mask) { mask &= ~viewMask; });
 				_lastUseViewMasks.push_back(viewMask);
 				std::for_each(_firstUseViewMasks.begin(), _firstUseViewMasks.end(), [&viewMask](uint32_t mask) { viewMask &= ~mask; });
@@ -818,8 +782,6 @@
     return { 1, 1 };
 }
 
-MVKRenderSubpass* MVKRenderPass::getSubpass(uint32_t subpassIndex) { return &_subpasses[subpassIndex]; }
-
 bool MVKRenderPass::isMultiview() const { return _subpasses[0].isMultiview(); }
 
 MVKRenderPass::MVKRenderPass(MVKDevice* device,
@@ -876,8 +838,14 @@
 	for (uint32_t i = 0; i < pCreateInfo->attachmentCount; i++) {
 		_attachments.emplace_back(this, &pCreateInfo->pAttachments[i]);
 	}
+
+	// Populate additional subpass info after attachments added.
+	for (auto& mvkSP : _subpasses) {
+		mvkSP.populatePipelineRenderingCreateInfo();
+	}
 }
 
+
 MVKRenderPass::MVKRenderPass(MVKDevice* device,
 							 const VkRenderPassCreateInfo2* pCreateInfo) : MVKVulkanAPIDeviceObject(device) {
 
@@ -896,6 +864,246 @@
 	for (uint32_t i = 0; i < pCreateInfo->attachmentCount; i++) {
 		_attachments.emplace_back(this, &pCreateInfo->pAttachments[i]);
 	}
+
+	// Populate additional subpass info after attachments added.
+	for (auto& mvkSP : _subpasses) {
+		mvkSP.populatePipelineRenderingCreateInfo();
+	}
 }
 
 
+#pragma mark -
+#pragma mark Support functions
+
+// Adds the rendering attachment info to the array of attachment descriptors at the index,
+// and increments the index, for both the base view and the resolve view, if it is present.
+static void mvkAddAttachmentDescriptor(const VkRenderingAttachmentInfo* pAttInfo,
+									   const VkRenderingAttachmentInfo* pStencilAttInfo,
+									   VkAttachmentDescription2 attachmentDescriptors[],
+									   uint32_t& attDescIdx) {
+	VkAttachmentDescription2 attDesc;
+	attDesc.sType = VK_STRUCTURE_TYPE_ATTACHMENT_DESCRIPTION_2;
+	attDesc.pNext = nullptr;
+	attDesc.flags = 0;
+	attDesc.loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+	attDesc.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+
+	// Handle stencil-only possibility.
+	if ( !pAttInfo ) { pAttInfo = pStencilAttInfo; }
+
+	if (pAttInfo && pAttInfo->imageView) {
+		MVKImageView* mvkImgView = (MVKImageView*)pAttInfo->imageView;
+		attDesc.format = mvkImgView->getVkFormat();
+		attDesc.samples = mvkImgView->getSampleCount();
+		attDesc.loadOp = pAttInfo->loadOp;
+		attDesc.storeOp = pAttInfo->storeOp;
+		attDesc.stencilLoadOp = pStencilAttInfo ? pStencilAttInfo->loadOp : VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+		attDesc.stencilStoreOp = pStencilAttInfo ? pStencilAttInfo->storeOp : VK_ATTACHMENT_STORE_OP_DONT_CARE;
+		attDesc.initialLayout = pAttInfo->imageLayout;
+		attDesc.finalLayout = pAttInfo->imageLayout;
+		attachmentDescriptors[attDescIdx++] = attDesc;
+
+		if (pAttInfo->resolveImageView && pAttInfo->resolveMode != VK_RESOLVE_MODE_NONE) {
+			attDesc.samples = VK_SAMPLE_COUNT_1_BIT;
+			attDesc.loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+			attDesc.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
+			attDesc.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+			attDesc.stencilStoreOp = pStencilAttInfo ? VK_ATTACHMENT_STORE_OP_STORE : VK_ATTACHMENT_STORE_OP_DONT_CARE;
+			attDesc.initialLayout = pAttInfo->resolveImageLayout;
+			attDesc.finalLayout = pAttInfo->resolveImageLayout;
+			attachmentDescriptors[attDescIdx++] = attDesc;
+		}
+	}
+}
+
+MVKRenderPass* mvkCreateRenderPass(MVKDevice* device, const VkRenderingInfo* pRenderingInfo) {
+
+	// Renderpass attachments are sequentially indexed in this order:
+	//     [color, color-resolve], ..., ds, ds-resolve
+	// skipping any attachments that do not have a VkImageView
+	uint32_t maxAttDescCnt = (pRenderingInfo->colorAttachmentCount + 1) * 2;
+	VkAttachmentDescription2 attachmentDescriptors[maxAttDescCnt];
+	VkAttachmentReference2 colorAttachmentRefs[pRenderingInfo->colorAttachmentCount];
+	VkAttachmentReference2 resolveAttachmentRefs[pRenderingInfo->colorAttachmentCount];
+
+	VkAttachmentReference2 attRef;
+	attRef.sType = VK_STRUCTURE_TYPE_ATTACHMENT_REFERENCE_2;
+	attRef.pNext = nullptr;
+	attRef.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
+
+	uint32_t attDescIdx = 0;
+	uint32_t caRefIdx = 0;
+	bool hasClrRslvAtt = false;
+	for (uint32_t caIdx = 0; caIdx < pRenderingInfo->colorAttachmentCount; caIdx++) {
+		auto& clrAtt = pRenderingInfo->pColorAttachments[caIdx];
+		if (clrAtt.imageView) {
+			attRef.layout = clrAtt.imageLayout;
+			attRef.attachment = attDescIdx;
+			colorAttachmentRefs[caRefIdx] = attRef;
+
+			if (clrAtt.resolveImageView && clrAtt.resolveMode != VK_RESOLVE_MODE_NONE) {
+				attRef.layout = clrAtt.resolveImageLayout;
+				attRef.attachment = attDescIdx + 1;
+				resolveAttachmentRefs[caRefIdx] = attRef;
+				hasClrRslvAtt = true;
+			}
+			caRefIdx++;
+		}
+		mvkAddAttachmentDescriptor(&clrAtt, nullptr, attachmentDescriptors, attDescIdx);
+	}
+
+	// Combine depth and stencil attachments into one depth-stencil attachment.
+	// If both depth and stencil are present, their views and layouts must match.
+	VkAttachmentReference2 dsAttRef;
+	VkAttachmentReference2 dsRslvAttRef;
+	VkResolveModeFlagBits depthResolveMode = VK_RESOLVE_MODE_NONE;
+	VkResolveModeFlagBits stencilResolveMode = VK_RESOLVE_MODE_NONE;
+
+	attRef.aspectMask = 0;
+	attRef.layout = VK_IMAGE_LAYOUT_UNDEFINED;
+	VkImageLayout rslvLayout = VK_IMAGE_LAYOUT_UNDEFINED;
+
+	if (pRenderingInfo->pDepthAttachment && pRenderingInfo->pDepthAttachment->imageView) {
+		attRef.aspectMask |= VK_IMAGE_ASPECT_DEPTH_BIT;
+		depthResolveMode = pRenderingInfo->pDepthAttachment->resolveMode;
+		attRef.layout = pRenderingInfo->pDepthAttachment->imageLayout;
+		rslvLayout = pRenderingInfo->pDepthAttachment->resolveImageLayout;
+	}
+	if (pRenderingInfo->pStencilAttachment && pRenderingInfo->pStencilAttachment->imageView) {
+		attRef.aspectMask |= VK_IMAGE_ASPECT_STENCIL_BIT;
+		stencilResolveMode = pRenderingInfo->pStencilAttachment->resolveMode;
+		attRef.layout = pRenderingInfo->pStencilAttachment->imageLayout;
+		rslvLayout = pRenderingInfo->pStencilAttachment->resolveImageLayout;
+	}
+
+	attRef.attachment = attRef.aspectMask ? attDescIdx : VK_ATTACHMENT_UNUSED;
+	dsAttRef = attRef;
+
+	attRef.layout = rslvLayout;
+	attRef.attachment = attDescIdx + 1;
+	dsRslvAttRef = attRef;
+
+	mvkAddAttachmentDescriptor(pRenderingInfo->pDepthAttachment,
+							   pRenderingInfo->pStencilAttachment,
+							   attachmentDescriptors, attDescIdx);
+
+	// Depth/stencil resolve handled via VkSubpassDescription2 pNext
+	VkSubpassDescriptionDepthStencilResolve dsRslv;
+	dsRslv.sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION_DEPTH_STENCIL_RESOLVE;
+	dsRslv.pNext = nullptr;
+	dsRslv.depthResolveMode = depthResolveMode;
+	dsRslv.stencilResolveMode = stencilResolveMode;
+	dsRslv.pDepthStencilResolveAttachment = &dsRslvAttRef;
+	bool hasDSRslvAtt = depthResolveMode != VK_RESOLVE_MODE_NONE || stencilResolveMode != VK_RESOLVE_MODE_NONE;
+
+	// Define the subpass
+	VkSubpassDescription2 spDesc;
+	spDesc.sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION_2;
+	spDesc.pNext = hasDSRslvAtt ? &dsRslv : nullptr;
+	spDesc.flags = 0;
+	spDesc.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
+	spDesc.viewMask = pRenderingInfo->viewMask;
+	spDesc.inputAttachmentCount = 0;
+	spDesc.pInputAttachments = nullptr;
+	spDesc.colorAttachmentCount = caRefIdx;
+	spDesc.pColorAttachments = colorAttachmentRefs;
+	spDesc.pResolveAttachments = hasClrRslvAtt ? resolveAttachmentRefs : nullptr;
+	spDesc.pDepthStencilAttachment = &dsAttRef;
+	spDesc.preserveAttachmentCount = 0;
+	spDesc.pPreserveAttachments = nullptr;
+
+	// Define the renderpass
+	VkRenderPassCreateInfo2 rpCreateInfo;
+	rpCreateInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO_2;
+	rpCreateInfo.pNext = nullptr;
+	rpCreateInfo.flags = 0;
+	rpCreateInfo.attachmentCount = attDescIdx;
+	rpCreateInfo.pAttachments = attachmentDescriptors;
+	rpCreateInfo.subpassCount = 1;
+	rpCreateInfo.pSubpasses = &spDesc;
+	rpCreateInfo.dependencyCount = 0;
+	rpCreateInfo.pDependencies = nullptr;
+	rpCreateInfo.correlatedViewMaskCount = 0;
+	rpCreateInfo.pCorrelatedViewMasks = nullptr;
+
+	auto* mvkRP = device->createRenderPass(&rpCreateInfo, nullptr);
+	mvkRP->setRenderingFlags(pRenderingInfo->flags);
+	return mvkRP;
+}
+
+uint32_t mvkGetAttachments(const VkRenderingInfo* pRenderingInfo,
+						   MVKImageView* attachments[],
+						   VkClearValue clearValues[]) {
+
+	// Renderpass attachments are sequentially indexed in this order:
+	//     [color, color-resolve], ..., ds, ds-resolve
+	// skipping any attachments that do not have a VkImageView.
+	// For consistency, we populate the clear value of any resolve attachments, even though they are ignored.
+	uint32_t attIdx = 0;
+	for (uint32_t caIdx = 0; caIdx < pRenderingInfo->colorAttachmentCount; caIdx++) {
+		auto& clrAtt = pRenderingInfo->pColorAttachments[caIdx];
+		if (clrAtt.imageView) {
+			clearValues[attIdx] = clrAtt.clearValue;
+			attachments[attIdx++] = (MVKImageView*)clrAtt.imageView;
+			if (clrAtt.resolveImageView && clrAtt.resolveMode != VK_RESOLVE_MODE_NONE) {
+				clearValues[attIdx] = clrAtt.clearValue;
+				attachments[attIdx++] = (MVKImageView*)clrAtt.resolveImageView;
+			}
+		}
+	}
+
+	// Combine the separate depth and stencil attachments into a single depth/stencil (DS) attachment.
+	auto* pDSAtt = pRenderingInfo->pDepthAttachment ? pRenderingInfo->pDepthAttachment : pRenderingInfo->pStencilAttachment;
+	if (pDSAtt) {
+		if (pDSAtt->imageView) {
+			clearValues[attIdx] = pDSAtt->clearValue;
+			attachments[attIdx++] = (MVKImageView*)pDSAtt->imageView;
+		}
+		if (pDSAtt->resolveImageView && pDSAtt->resolveMode != VK_RESOLVE_MODE_NONE) {
+			clearValues[attIdx] = pDSAtt->clearValue;
+			attachments[attIdx++] = (MVKImageView*)pDSAtt->resolveImageView;
+		}
+	}
+
+	return attIdx;
+}
+
+bool mvkIsColorAttachmentUsed(const VkPipelineRenderingCreateInfo* pRendInfo, uint32_t colorAttIdx) {
+	return pRendInfo && pRendInfo->pColorAttachmentFormats[colorAttIdx];
+}
+
+bool mvkHasColorAttachments(const VkPipelineRenderingCreateInfo* pRendInfo) {
+	if (pRendInfo) {
+		for (uint32_t caIdx = 0; caIdx < pRendInfo->colorAttachmentCount; caIdx++) {
+			if (mvkIsColorAttachmentUsed(pRendInfo, caIdx)) { return true; }
+		}
+	}
+	return false;
+}
+
+VkFormat mvkGetDepthStencilFormat(const VkPipelineRenderingCreateInfo* pRendInfo) {
+	return (pRendInfo
+			? (pRendInfo->depthAttachmentFormat
+			   ? pRendInfo->depthAttachmentFormat
+			   : pRendInfo->stencilAttachmentFormat)
+			: VK_FORMAT_UNDEFINED);
+}
+
+uint32_t mvkGetNextViewMaskGroup(uint32_t viewMask, uint32_t* startView, uint32_t* viewCount, uint32_t *groupMask) {
+	// First, find the first set bit. This is the start of the next clump of views to be rendered.
+	// N.B. ffs(3) returns a 1-based index, so subtract 1 to get the 0-based bit position.
+	int pos = ffs(viewMask) - 1;
+	int end = pos;
+	if (groupMask) { *groupMask = 0; }
+	// Now we'll step through the bits one at a time until we find a bit that isn't set.
+	// This is one past the end of the next clump. Clear the bits as we go, so we can use
+	// ffs(3) again on the next clump.
+	// TODO: Find a way to make this faster.
+	while (viewMask & (1 << end)) {
+		if (groupMask) { *groupMask |= viewMask & (1 << end); }
+		viewMask &= ~(1 << (end++));
+	}
+	if (startView) { *startView = pos; }
+	if (viewCount) { *viewCount = end - pos; }
+	return viewMask;
+}
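The clump-scanning loop above can be sketched as a standalone function. This is a hypothetical reimplementation for illustration (the name `nextViewMaskGroup` and the portable bit-scanning loop are assumptions; the real code uses `ffs(3)`):

```cpp
#include <cstdint>

// Returns the view mask with the lowest contiguous run of set bits cleared,
// reporting that run's start bit and length, mirroring how
// mvkGetNextViewMaskGroup walks a Vulkan multiview view mask.
inline uint32_t nextViewMaskGroup(uint32_t viewMask, uint32_t* startView, uint32_t* viewCount) {
	uint32_t pos = 0;
	while (pos < 32 && !(viewMask & (1u << pos))) { pos++; }   // find the first set bit
	uint32_t end = pos;
	while (end < 32 && (viewMask & (1u << end))) {             // clear the contiguous run of set bits
		viewMask &= ~(1u << end++);
	}
	if (startView) { *startView = pos; }
	if (viewCount) { *viewCount = end - pos; }
	return viewMask;
}
```

Calling this repeatedly until the returned mask is zero visits each clump of consecutive views, which is what lets multiview rendering be submitted in contiguous layer ranges.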
diff --git a/MoltenVK/MoltenVK/GPUObjects/MVKSync.mm b/MoltenVK/MoltenVK/GPUObjects/MVKSync.mm
index d173b4d..60af66b 100644
--- a/MoltenVK/MoltenVK/GPUObjects/MVKSync.mm
+++ b/MoltenVK/MoltenVK/GPUObjects/MVKSync.mm
@@ -302,7 +302,7 @@
 	MVKTimelineSemaphore(device, pCreateInfo, pTypeCreateInfo, pExportInfo, pImportInfo),
 	_value(pTypeCreateInfo ? pTypeCreateInfo->initialValue : 0) {
 
-	if (pExportInfo && mvkIsAnyFlagEnabled(pExportInfo->exportObjectTypes, VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT)) {
+	if (pExportInfo && pExportInfo->exportObjectType == VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT) {
 		setConfigurationResult(reportError(VK_ERROR_INITIALIZATION_FAILED, "vkCreateEvent(): MTLSharedEvent is not available on this platform."));
 	}
 }
@@ -454,7 +454,7 @@
 								   const VkImportMetalSharedEventInfoEXT* pImportInfo) :
 	MVKEvent(device, pCreateInfo, pExportInfo, pImportInfo), _blocker(false, 1), _inlineSignalStatus(false) {
 
-	if (pExportInfo && mvkIsAnyFlagEnabled(pExportInfo->exportObjectTypes, VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT)) {
+	if (pExportInfo && pExportInfo->exportObjectType == VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT) {
 		setConfigurationResult(reportError(VK_ERROR_INITIALIZATION_FAILED, "vkCreateEvent(): MTLSharedEvent is not available on this platform."));
 	}
 }
diff --git a/MoltenVK/MoltenVK/Layers/MVKExtensions.def b/MoltenVK/MoltenVK/Layers/MVKExtensions.def
index d48f3aa..dc5751d 100644
--- a/MoltenVK/MoltenVK/Layers/MVKExtensions.def
+++ b/MoltenVK/MoltenVK/Layers/MVKExtensions.def
@@ -16,13 +16,15 @@
  * limitations under the License.
  */
 
-// To use this file, define the macro MVK_EXTENSION(var, EXT, type), then #include this file.
+// To use this file, define the macro MVK_EXTENSION(var, EXT, type, macos, ios), then #include this file.
 // To add a new extension, simply add an MVK_EXTENSION line below. The macro takes the
 // portion of the extension name without the leading "VK_", both in lowercase and uppercase,
 // plus a value representing the extension type (instance/device/...).
 // The last line in the list must be an MVK_EXTENSION_LAST line; this is used in the MVKExtensionList
 // constructor to avoid a dangling ',' at the end of the initializer list.
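The X-macro usage described in this comment can be sketched with a minimal self-contained example. The two-entry `MVK_EXTENSION_LIST` below is a hypothetical stand-in for `#include "MVKExtensions.def"`, and the helper names are invented for illustration; only the expansion pattern mirrors the real file:

```cpp
#include <string>
#include <vector>

// Stand-in for the extension list the .def file supplies: each entry carries the
// variable name, the uppercase name, the type, and the minimum macOS/iOS versions.
#define MVK_EXTENSION_LIST(X) \
	X(KHR_surface,   KHR_SURFACE,   INSTANCE, 10.11, 8.0) \
	X(KHR_swapchain, KHR_SWAPCHAIN, DEVICE,   10.11, 8.0)

// Expansion 1: count the listed extensions (as MVKExtensionList::initCount() does).
inline int countListedExtensions() {
	int count = 0;
#define MVK_EXTENSION(var, EXT, type, macos, ios) count++;
	MVK_EXTENSION_LIST(MVK_EXTENSION)
#undef MVK_EXTENSION
	return count;
}

// Expansion 2: collect the extension names by stringizing the var token.
inline std::vector<std::string> listedExtensionNames() {
	std::vector<std::string> names;
#define MVK_EXTENSION(var, EXT, type, macos, ios) names.push_back("VK_" #var);
	MVK_EXTENSION_LIST(MVK_EXTENSION)
#undef MVK_EXTENSION
	return names;
}
```

Each consumer defines `MVK_EXTENSION` for its own purpose, expands the list, and undefines the macro, so one table drives counting, declaration, and per-platform validation without duplication.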
 
+#define MVK_NA  kMVKOSVersionUnsupported
+
 #ifndef MVK_EXTENSION_INSTANCE
 #define MVK_EXTENSION_INSTANCE	0
 #endif
@@ -36,85 +38,90 @@
 #endif
 
 #ifndef MVK_EXTENSION_LAST
-#define MVK_EXTENSION_LAST(var, EXT, type) MVK_EXTENSION(var, EXT, type)
+#define MVK_EXTENSION_LAST(var, EXT, type, macos, ios) MVK_EXTENSION(var, EXT, type, macos, ios)
 #endif
 
-MVK_EXTENSION(KHR_16bit_storage, KHR_16BIT_STORAGE, DEVICE)
-MVK_EXTENSION(KHR_8bit_storage, KHR_8BIT_STORAGE, DEVICE)
-MVK_EXTENSION(KHR_bind_memory2, KHR_BIND_MEMORY_2, DEVICE)
-MVK_EXTENSION(KHR_create_renderpass2, KHR_CREATE_RENDERPASS_2, DEVICE)
-MVK_EXTENSION(KHR_dedicated_allocation, KHR_DEDICATED_ALLOCATION, DEVICE)
-MVK_EXTENSION(KHR_depth_stencil_resolve, KHR_DEPTH_STENCIL_RESOLVE, DEVICE)
-MVK_EXTENSION(KHR_descriptor_update_template, KHR_DESCRIPTOR_UPDATE_TEMPLATE, DEVICE)
-MVK_EXTENSION(KHR_device_group, KHR_DEVICE_GROUP, DEVICE)
-MVK_EXTENSION(KHR_device_group_creation, KHR_DEVICE_GROUP_CREATION, INSTANCE)
-MVK_EXTENSION(KHR_driver_properties, KHR_DRIVER_PROPERTIES, DEVICE)
-MVK_EXTENSION(KHR_external_fence, KHR_EXTERNAL_FENCE, DEVICE)
-MVK_EXTENSION(KHR_external_fence_capabilities, KHR_EXTERNAL_FENCE_CAPABILITIES, INSTANCE)
-MVK_EXTENSION(KHR_external_memory, KHR_EXTERNAL_MEMORY, DEVICE)
-MVK_EXTENSION(KHR_external_memory_capabilities, KHR_EXTERNAL_MEMORY_CAPABILITIES, INSTANCE)
-MVK_EXTENSION(KHR_external_semaphore, KHR_EXTERNAL_SEMAPHORE, DEVICE)
-MVK_EXTENSION(KHR_external_semaphore_capabilities, KHR_EXTERNAL_SEMAPHORE_CAPABILITIES, INSTANCE)
-MVK_EXTENSION(KHR_get_memory_requirements2, KHR_GET_MEMORY_REQUIREMENTS_2, DEVICE)
-MVK_EXTENSION(KHR_get_physical_device_properties2, KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2, INSTANCE)
-MVK_EXTENSION(KHR_get_surface_capabilities2, KHR_GET_SURFACE_CAPABILITIES_2, INSTANCE)
-MVK_EXTENSION(KHR_imageless_framebuffer, KHR_IMAGELESS_FRAMEBUFFER, DEVICE)
-MVK_EXTENSION(KHR_image_format_list, KHR_IMAGE_FORMAT_LIST, DEVICE)
-MVK_EXTENSION(KHR_maintenance1, KHR_MAINTENANCE1, DEVICE)
-MVK_EXTENSION(KHR_maintenance2, KHR_MAINTENANCE2, DEVICE)
-MVK_EXTENSION(KHR_maintenance3, KHR_MAINTENANCE3, DEVICE)
-MVK_EXTENSION(KHR_multiview, KHR_MULTIVIEW, DEVICE)
-MVK_EXTENSION(KHR_portability_subset, KHR_PORTABILITY_SUBSET, DEVICE)
-MVK_EXTENSION(KHR_push_descriptor, KHR_PUSH_DESCRIPTOR, DEVICE)
-MVK_EXTENSION(KHR_relaxed_block_layout, KHR_RELAXED_BLOCK_LAYOUT, DEVICE)
-MVK_EXTENSION(KHR_sampler_mirror_clamp_to_edge, KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE, DEVICE)
-MVK_EXTENSION(KHR_sampler_ycbcr_conversion, KHR_SAMPLER_YCBCR_CONVERSION, DEVICE)
-MVK_EXTENSION(KHR_shader_draw_parameters, KHR_SHADER_DRAW_PARAMETERS, DEVICE)
-MVK_EXTENSION(KHR_shader_float16_int8, KHR_SHADER_FLOAT16_INT8, DEVICE)
-MVK_EXTENSION(KHR_shader_subgroup_extended_types, KHR_SHADER_SUBGROUP_EXTENDED_TYPES, DEVICE)
-MVK_EXTENSION(KHR_storage_buffer_storage_class, KHR_STORAGE_BUFFER_STORAGE_CLASS, DEVICE)
-MVK_EXTENSION(KHR_surface, KHR_SURFACE, INSTANCE)
-MVK_EXTENSION(KHR_swapchain, KHR_SWAPCHAIN, DEVICE)
-MVK_EXTENSION(KHR_swapchain_mutable_format, KHR_SWAPCHAIN_MUTABLE_FORMAT, DEVICE)
-MVK_EXTENSION(KHR_timeline_semaphore, KHR_TIMELINE_SEMAPHORE, DEVICE)
-MVK_EXTENSION(KHR_uniform_buffer_standard_layout, KHR_UNIFORM_BUFFER_STANDARD_LAYOUT, DEVICE)
-MVK_EXTENSION(KHR_variable_pointers, KHR_VARIABLE_POINTERS, DEVICE)
-MVK_EXTENSION(EXT_debug_marker, EXT_DEBUG_MARKER, DEVICE)
-MVK_EXTENSION(EXT_debug_report, EXT_DEBUG_REPORT, INSTANCE)
-MVK_EXTENSION(EXT_debug_utils, EXT_DEBUG_UTILS, INSTANCE)
-MVK_EXTENSION(EXT_descriptor_indexing, EXT_DESCRIPTOR_INDEXING, DEVICE)
-MVK_EXTENSION(EXT_fragment_shader_interlock, EXT_FRAGMENT_SHADER_INTERLOCK, DEVICE)
-MVK_EXTENSION(EXT_hdr_metadata, EXT_HDR_METADATA, DEVICE)
-MVK_EXTENSION(EXT_host_query_reset, EXT_HOST_QUERY_RESET, DEVICE)
-MVK_EXTENSION(EXT_image_robustness, EXT_IMAGE_ROBUSTNESS, DEVICE)
-MVK_EXTENSION(EXT_inline_uniform_block, EXT_INLINE_UNIFORM_BLOCK, DEVICE)
-MVK_EXTENSION(EXT_memory_budget, EXT_MEMORY_BUDGET, DEVICE)
-MVK_EXTENSION(EXT_metal_objects, EXT_METAL_OBJECTS, DEVICE)
-MVK_EXTENSION(EXT_metal_surface, EXT_METAL_SURFACE, INSTANCE)
-MVK_EXTENSION(EXT_post_depth_coverage, EXT_POST_DEPTH_COVERAGE, DEVICE)
-MVK_EXTENSION(EXT_private_data, EXT_PRIVATE_DATA, DEVICE)
-MVK_EXTENSION(EXT_robustness2, EXT_ROBUSTNESS_2, DEVICE)
-MVK_EXTENSION(EXT_scalar_block_layout, EXT_SCALAR_BLOCK_LAYOUT, DEVICE)
-MVK_EXTENSION(EXT_shader_stencil_export, EXT_SHADER_STENCIL_EXPORT, DEVICE)
-MVK_EXTENSION(EXT_shader_viewport_index_layer, EXT_SHADER_VIEWPORT_INDEX_LAYER, DEVICE)
-MVK_EXTENSION(EXT_subgroup_size_control, EXT_SUBGROUP_SIZE_CONTROL, DEVICE)
-MVK_EXTENSION(EXT_swapchain_colorspace, EXT_SWAPCHAIN_COLOR_SPACE, INSTANCE)
-MVK_EXTENSION(EXT_texel_buffer_alignment, EXT_TEXEL_BUFFER_ALIGNMENT, DEVICE)
-MVK_EXTENSION(EXT_texture_compression_astc_hdr, EXT_TEXTURE_COMPRESSION_ASTC_HDR, DEVICE)
-MVK_EXTENSION(EXT_vertex_attribute_divisor, EXT_VERTEX_ATTRIBUTE_DIVISOR, DEVICE)
-MVK_EXTENSION(AMD_gpu_shader_half_float, AMD_GPU_SHADER_HALF_FLOAT, DEVICE)
-MVK_EXTENSION(AMD_negative_viewport_height, AMD_NEGATIVE_VIEWPORT_HEIGHT, DEVICE)
-MVK_EXTENSION(AMD_shader_image_load_store_lod, AMD_SHADER_IMAGE_LOAD_STORE_LOD, DEVICE)
-MVK_EXTENSION(AMD_shader_trinary_minmax, AMD_SHADER_TRINARY_MINMAX, DEVICE)
-MVK_EXTENSION(IMG_format_pvrtc, IMG_FORMAT_PVRTC, DEVICE)
-MVK_EXTENSION(INTEL_shader_integer_functions2, INTEL_SHADER_INTEGER_FUNCTIONS_2, DEVICE)
-MVK_EXTENSION(GOOGLE_display_timing, GOOGLE_DISPLAY_TIMING, DEVICE)
-MVK_EXTENSION(MVK_ios_surface, MVK_IOS_SURFACE, INSTANCE)
-MVK_EXTENSION(MVK_macos_surface, MVK_MACOS_SURFACE, INSTANCE)
-MVK_EXTENSION(MVK_moltenvk, MVK_MOLTENVK, INSTANCE)
-MVK_EXTENSION_LAST(NV_glsl_shader, NV_GLSL_SHADER, DEVICE)
+MVK_EXTENSION(KHR_16bit_storage,                   KHR_16BIT_STORAGE,                    DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_8bit_storage,                    KHR_8BIT_STORAGE,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_bind_memory2,                    KHR_BIND_MEMORY_2,                    DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_create_renderpass2,              KHR_CREATE_RENDERPASS_2,              DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_dedicated_allocation,            KHR_DEDICATED_ALLOCATION,             DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_depth_stencil_resolve,           KHR_DEPTH_STENCIL_RESOLVE,            DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_descriptor_update_template,      KHR_DESCRIPTOR_UPDATE_TEMPLATE,       DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_device_group,                    KHR_DEVICE_GROUP,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_device_group_creation,           KHR_DEVICE_GROUP_CREATION,            INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_driver_properties,               KHR_DRIVER_PROPERTIES,                DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_dynamic_rendering,               KHR_DYNAMIC_RENDERING,                DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_external_fence,                  KHR_EXTERNAL_FENCE,                   DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_external_fence_capabilities,     KHR_EXTERNAL_FENCE_CAPABILITIES,      INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_external_memory,                 KHR_EXTERNAL_MEMORY,                  DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_external_memory_capabilities,    KHR_EXTERNAL_MEMORY_CAPABILITIES,     INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_external_semaphore,              KHR_EXTERNAL_SEMAPHORE,               DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_external_semaphore_capabilities, KHR_EXTERNAL_SEMAPHORE_CAPABILITIES,  INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_get_memory_requirements2,        KHR_GET_MEMORY_REQUIREMENTS_2,        DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_get_physical_device_properties2, KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2, INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_get_surface_capabilities2,       KHR_GET_SURFACE_CAPABILITIES_2,       INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_imageless_framebuffer,           KHR_IMAGELESS_FRAMEBUFFER,            DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_image_format_list,               KHR_IMAGE_FORMAT_LIST,                DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_maintenance1,                    KHR_MAINTENANCE1,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_maintenance2,                    KHR_MAINTENANCE2,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_maintenance3,                    KHR_MAINTENANCE3,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_multiview,                       KHR_MULTIVIEW,                        DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_portability_subset,              KHR_PORTABILITY_SUBSET,               DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_push_descriptor,                 KHR_PUSH_DESCRIPTOR,                  DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_relaxed_block_layout,            KHR_RELAXED_BLOCK_LAYOUT,             DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_sampler_mirror_clamp_to_edge,    KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE,     DEVICE,   10.11, 14.0)
+MVK_EXTENSION(KHR_sampler_ycbcr_conversion,        KHR_SAMPLER_YCBCR_CONVERSION,         DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_separate_depth_stencil_layouts,  KHR_SEPARATE_DEPTH_STENCIL_LAYOUTS,   DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_shader_draw_parameters,          KHR_SHADER_DRAW_PARAMETERS,           DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_shader_float16_int8,             KHR_SHADER_FLOAT16_INT8,              DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_shader_subgroup_extended_types,  KHR_SHADER_SUBGROUP_EXTENDED_TYPES,   DEVICE,   10.14, 13.0)
+MVK_EXTENSION(KHR_storage_buffer_storage_class,    KHR_STORAGE_BUFFER_STORAGE_CLASS,     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_surface,                         KHR_SURFACE,                          INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(KHR_swapchain,                       KHR_SWAPCHAIN,                        DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_swapchain_mutable_format,        KHR_SWAPCHAIN_MUTABLE_FORMAT,         DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_timeline_semaphore,              KHR_TIMELINE_SEMAPHORE,               DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_uniform_buffer_standard_layout,  KHR_UNIFORM_BUFFER_STANDARD_LAYOUT,   DEVICE,   10.11,  8.0)
+MVK_EXTENSION(KHR_variable_pointers,               KHR_VARIABLE_POINTERS,                DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_debug_marker,                    EXT_DEBUG_MARKER,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_debug_report,                    EXT_DEBUG_REPORT,                     INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(EXT_debug_utils,                     EXT_DEBUG_UTILS,                      INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(EXT_descriptor_indexing,             EXT_DESCRIPTOR_INDEXING,              DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_fragment_shader_interlock,       EXT_FRAGMENT_SHADER_INTERLOCK,        DEVICE,   10.13, 11.0)
+MVK_EXTENSION(EXT_hdr_metadata,                    EXT_HDR_METADATA,                     DEVICE,   10.15, MVK_NA)
+MVK_EXTENSION(EXT_host_query_reset,                EXT_HOST_QUERY_RESET,                 DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_image_robustness,                EXT_IMAGE_ROBUSTNESS,                 DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_inline_uniform_block,            EXT_INLINE_UNIFORM_BLOCK,             DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_memory_budget,                   EXT_MEMORY_BUDGET,                    DEVICE,   10.13, 11.0)
+MVK_EXTENSION(EXT_metal_objects,                   EXT_METAL_OBJECTS,                    DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_metal_surface,                   EXT_METAL_SURFACE,                    INSTANCE, 10.11,  8.0)
+MVK_EXTENSION(EXT_post_depth_coverage,             EXT_POST_DEPTH_COVERAGE,              DEVICE,   11.0,  11.0)
+MVK_EXTENSION(EXT_private_data,                    EXT_PRIVATE_DATA,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_robustness2,                     EXT_ROBUSTNESS_2,                     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_sample_locations,                EXT_SAMPLE_LOCATIONS,                 DEVICE,   10.13, 11.0)
+MVK_EXTENSION(EXT_scalar_block_layout,             EXT_SCALAR_BLOCK_LAYOUT,              DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_separate_stencil_usage,          EXT_SEPARATE_STENCIL_USAGE,           DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_shader_stencil_export,           EXT_SHADER_STENCIL_EXPORT,            DEVICE,   10.14, 12.0)
+MVK_EXTENSION(EXT_shader_viewport_index_layer,     EXT_SHADER_VIEWPORT_INDEX_LAYER,      DEVICE,   10.11,  8.0)
+MVK_EXTENSION(EXT_subgroup_size_control,           EXT_SUBGROUP_SIZE_CONTROL,            DEVICE,   10.14, 13.0)
+MVK_EXTENSION(EXT_swapchain_colorspace,            EXT_SWAPCHAIN_COLOR_SPACE,            INSTANCE, 10.11,  9.0)
+MVK_EXTENSION(EXT_texel_buffer_alignment,          EXT_TEXEL_BUFFER_ALIGNMENT,           DEVICE,   10.13, 11.0)
+MVK_EXTENSION(EXT_texture_compression_astc_hdr,    EXT_TEXTURE_COMPRESSION_ASTC_HDR,     DEVICE,   11.0,  13.0)
+MVK_EXTENSION(EXT_vertex_attribute_divisor,        EXT_VERTEX_ATTRIBUTE_DIVISOR,         DEVICE,   10.11,  8.0)
+MVK_EXTENSION(AMD_gpu_shader_half_float,           AMD_GPU_SHADER_HALF_FLOAT,            DEVICE,   10.11,  8.0)
+MVK_EXTENSION(AMD_negative_viewport_height,        AMD_NEGATIVE_VIEWPORT_HEIGHT,         DEVICE,   10.11,  8.0)
+MVK_EXTENSION(AMD_shader_image_load_store_lod,     AMD_SHADER_IMAGE_LOAD_STORE_LOD,      DEVICE,   11.0,   8.0)
+MVK_EXTENSION(AMD_shader_trinary_minmax,           AMD_SHADER_TRINARY_MINMAX,            DEVICE,   10.14, 12.0)
+MVK_EXTENSION(IMG_format_pvrtc,                    IMG_FORMAT_PVRTC,                     DEVICE,   11.0,   8.0)
+MVK_EXTENSION(INTEL_shader_integer_functions2,     INTEL_SHADER_INTEGER_FUNCTIONS_2,     DEVICE,   10.11,  8.0)
+MVK_EXTENSION(GOOGLE_display_timing,               GOOGLE_DISPLAY_TIMING,                DEVICE,   10.11,  8.0)
+MVK_EXTENSION(MVK_ios_surface,                     MVK_IOS_SURFACE,                      INSTANCE, MVK_NA, 8.0)
+MVK_EXTENSION(MVK_macos_surface,                   MVK_MACOS_SURFACE,                    INSTANCE, 10.11, MVK_NA)
+MVK_EXTENSION(MVK_moltenvk,                        MVK_MOLTENVK,                         INSTANCE, 10.11,  8.0)
+MVK_EXTENSION_LAST(NV_glsl_shader,                 NV_GLSL_SHADER,                       DEVICE,   10.11,  8.0)
 
 #undef MVK_EXTENSION
 #undef MVK_EXTENSION_LAST
 #undef MVK_EXTENSION_INSTANCE
 #undef MVK_EXTENSION_DEVICE
+#undef MVK_NA
diff --git a/MoltenVK/MoltenVK/Layers/MVKExtensions.h b/MoltenVK/MoltenVK/Layers/MVKExtensions.h
index 3b4d884..5ec5153 100644
--- a/MoltenVK/MoltenVK/Layers/MVKExtensions.h
+++ b/MoltenVK/MoltenVK/Layers/MVKExtensions.h
@@ -52,7 +52,7 @@
 
 	union {
 		struct {
-#define MVK_EXTENSION(var, EXT, type) MVKExtension vk_ ##var;
+#define MVK_EXTENSION(var, EXT, type, macos, ios) MVKExtension vk_ ##var;
 #include "MVKExtensions.def"
 		};
 		MVKExtension extensionArray;
diff --git a/MoltenVK/MoltenVK/Layers/MVKExtensions.mm b/MoltenVK/MoltenVK/Layers/MVKExtensions.mm
index 8866c93..834f96b 100644
--- a/MoltenVK/MoltenVK/Layers/MVKExtensions.mm
+++ b/MoltenVK/MoltenVK/Layers/MVKExtensions.mm
@@ -39,13 +39,12 @@
 }
 
 // Extension properties
-#define MVK_EXTENSION(var, EXT, type) \
+#define MVK_EXTENSION(var, EXT, type, macos, ios) \
 static VkExtensionProperties kVkExtProps_ ##EXT = mvkMakeExtProps(VK_ ##EXT ##_EXTENSION_NAME, VK_ ##EXT ##_SPEC_VERSION);
 #include "MVKExtensions.def"
 
 // Returns whether the specified properties are valid for this platform
 static bool mvkIsSupportedOnPlatform(VkExtensionProperties* pProperties) {
-#define MVK_NA  kMVKOSVersionUnsupported
 #define MVK_EXTENSION_MIN_OS(EXT, MAC, IOS) \
 	if (pProperties == &kVkExtProps_##EXT) { return mvkOSVersionIsAtLeast(MAC, IOS); }
 
@@ -53,6 +52,7 @@
 	// only advertise those supported extensions that have been specifically configured.
 	auto advExtns = mvkConfig().advertiseExtensions;
 	if ( !mvkIsAnyFlagEnabled(advExtns, MVK_CONFIG_ADVERTISE_EXTENSIONS_ALL) ) {
+#define MVK_NA  kMVKOSVersionUnsupported
 		if (mvkIsAnyFlagEnabled(advExtns, MVK_CONFIG_ADVERTISE_EXTENSIONS_MOLTENVK)) {
 			MVK_EXTENSION_MIN_OS(MVK_MOLTENVK,                         10.11,  8.0)
 		}
@@ -67,32 +67,17 @@
 			MVK_EXTENSION_MIN_OS(KHR_PORTABILITY_SUBSET,               10.11,  8.0)
 			MVK_EXTENSION_MIN_OS(KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2, 10.11,  8.0)
 		}
+#undef MVK_NA
 
 		return false;
 	}
 
-	MVK_EXTENSION_MIN_OS(MVK_IOS_SURFACE,                    MVK_NA, 8.0)
-	MVK_EXTENSION_MIN_OS(MVK_MACOS_SURFACE,                  10.11,  MVK_NA)
-
-	MVK_EXTENSION_MIN_OS(EXT_HDR_METADATA,                   10.15,  MVK_NA)
-	MVK_EXTENSION_MIN_OS(AMD_SHADER_IMAGE_LOAD_STORE_LOD,    10.16,  8.0)
-	MVK_EXTENSION_MIN_OS(IMG_FORMAT_PVRTC,                   10.16,  8.0)
-	MVK_EXTENSION_MIN_OS(KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE,   10.11,  14.0)
-	MVK_EXTENSION_MIN_OS(EXT_SWAPCHAIN_COLOR_SPACE,          10.11,  9.0)
-	MVK_EXTENSION_MIN_OS(KHR_SHADER_SUBGROUP_EXTENDED_TYPES, 10.14,  13.0)
-	MVK_EXTENSION_MIN_OS(EXT_FRAGMENT_SHADER_INTERLOCK,      10.13,  11.0)
-	MVK_EXTENSION_MIN_OS(EXT_MEMORY_BUDGET,                  10.13,  11.0)
-	MVK_EXTENSION_MIN_OS(EXT_POST_DEPTH_COVERAGE,            10.16,  11.0)
-	MVK_EXTENSION_MIN_OS(EXT_SHADER_STENCIL_EXPORT,          10.14,  12.0)
-	MVK_EXTENSION_MIN_OS(EXT_SUBGROUP_SIZE_CONTROL,          10.14,  13.0)
-	MVK_EXTENSION_MIN_OS(EXT_TEXEL_BUFFER_ALIGNMENT,         10.13,  11.0)
-	MVK_EXTENSION_MIN_OS(EXT_TEXTURE_COMPRESSION_ASTC_HDR,   10.16,  13.0)
-	MVK_EXTENSION_MIN_OS(AMD_SHADER_TRINARY_MINMAX,          10.14,  12.0)
-
-	return true;
-
-#undef MVK_NA
+	// Otherwise, enumerate all available extensions to match the extension being validated for OS support.
+#define MVK_EXTENSION(var, EXT, type, macos, ios)  MVK_EXTENSION_MIN_OS(EXT, macos, ios)
+#include "MVKExtensions.def"
 #undef MVK_EXTENSION_MIN_OS
+
+	return false;
 }
 
 // Disable by default unless asked to enable for platform and the extension is valid for this platform
@@ -106,8 +91,8 @@
 #pragma mark MVKExtensionList
 
 MVKExtensionList::MVKExtensionList(MVKVulkanAPIObject* apiObject, bool enableForPlatform) : _apiObject(apiObject),
-#define MVK_EXTENSION_LAST(var, EXT, type)		vk_ ##var(&kVkExtProps_ ##EXT, enableForPlatform)
-#define MVK_EXTENSION(var, EXT, type)			MVK_EXTENSION_LAST(var, EXT, type),
+#define MVK_EXTENSION_LAST(var, EXT, type, macos, ios)		vk_ ##var(&kVkExtProps_ ##EXT, enableForPlatform)
+#define MVK_EXTENSION(var, EXT, type, macos, ios)			MVK_EXTENSION_LAST(var, EXT, type, macos, ios),
 #include "MVKExtensions.def"
 {
 	initCount();
@@ -118,7 +103,7 @@
 void MVKExtensionList::initCount() {
 	_count = 0;
 
-#define MVK_EXTENSION(var, EXT, type) _count++;
+#define MVK_EXTENSION(var, EXT, type, macos, ios) _count++;
 #include "MVKExtensions.def"
 }
 
@@ -127,14 +112,14 @@
 void MVKExtensionList::disableAllButEnabledInstanceExtensions() {
 #define MVK_EXTENSION_INSTANCE         true
 #define MVK_EXTENSION_DEVICE           false
-#define MVK_EXTENSION(var, EXT, type)  MVK_ENSURE_EXTENSION_TYPE(var, EXT, type)
+#define MVK_EXTENSION(var, EXT, type, macos, ios)  MVK_ENSURE_EXTENSION_TYPE(var, EXT, type)
 #include "MVKExtensions.def"
 }
 
 void MVKExtensionList::disableAllButEnabledDeviceExtensions() {
 #define MVK_EXTENSION_INSTANCE         false
 #define MVK_EXTENSION_DEVICE           true
-#define MVK_EXTENSION(var, EXT, type)  MVK_ENSURE_EXTENSION_TYPE(var, EXT, type)
+#define MVK_EXTENSION(var, EXT, type, macos, ios)  MVK_ENSURE_EXTENSION_TYPE(var, EXT, type)
 #include "MVKExtensions.def"
 }
 
diff --git a/MoltenVK/MoltenVK/Utility/MVKFoundation.h b/MoltenVK/MoltenVK/Utility/MVKFoundation.h
index 5e79965..30d0a4b 100644
--- a/MoltenVK/MoltenVK/Utility/MVKFoundation.h
+++ b/MoltenVK/MoltenVK/Utility/MVKFoundation.h
@@ -442,6 +442,11 @@
 	const Type* end() const { return &data[size]; }
 	const Type& operator[]( const size_t i ) const { return data[i]; }
 	Type& operator[]( const size_t i ) { return data[i]; }
+	MVKArrayRef<Type>& operator=(const MVKArrayRef<Type>& other) {
+		data = other.data;
+		*(size_t*)&size = other.size;	// Cast away const; the const size member suppresses the implicit copy-assignment operator.
+		return *this;
+	}
 	MVKArrayRef() : MVKArrayRef(nullptr, 0) {}
 	MVKArrayRef(Type* d, size_t s) : data(d), size(s) {}
 };
diff --git a/MoltenVK/MoltenVK/Vulkan/vk_mvk_moltenvk.mm b/MoltenVK/MoltenVK/Vulkan/vk_mvk_moltenvk.mm
index 39b3994..72a8ac6 100644
--- a/MoltenVK/MoltenVK/Vulkan/vk_mvk_moltenvk.mm
+++ b/MoltenVK/MoltenVK/Vulkan/vk_mvk_moltenvk.mm
@@ -141,7 +141,7 @@
     VkQueue                                     queue,
     id<MTLCommandQueue>*                        pMTLCommandQueue) {
 
-    MVKQueue* mvkQueue = (MVKQueue*)queue;
+    MVKQueue* mvkQueue = MVKQueue::getMVKQueue(queue);
     *pMTLCommandQueue = mvkQueue->getMTLCommandQueue();
 }
 
diff --git a/MoltenVK/MoltenVK/Vulkan/vulkan.mm b/MoltenVK/MoltenVK/Vulkan/vulkan.mm
index 31ce334..10cb874 100644
--- a/MoltenVK/MoltenVK/Vulkan/vulkan.mm
+++ b/MoltenVK/MoltenVK/Vulkan/vulkan.mm
@@ -2375,6 +2375,28 @@
 
 
 #pragma mark -
+#pragma mark VK_KHR_dynamic_rendering extension
+
+void vkCmdBeginRenderingKHR(
+	VkCommandBuffer                             commandBuffer,
+	const VkRenderingInfo*                      pRenderingInfo) {
+
+	MVKTraceVulkanCallStart();
+	MVKAddCmdFrom3Thresholds(BeginRendering, pRenderingInfo->colorAttachmentCount,
+							 1, 2, 4, commandBuffer, pRenderingInfo);
+	MVKTraceVulkanCallEnd();
+}
+
+void vkCmdEndRenderingKHR(
+	VkCommandBuffer                             commandBuffer) {
+
+	MVKTraceVulkanCallStart();
+	MVKAddCmd(EndRendering, commandBuffer);
+	MVKTraceVulkanCallEnd();
+}
+
+
+#pragma mark -
 #pragma mark VK_KHR_descriptor_update_template extension
 
 MVK_PUBLIC_VULKAN_CORE_ALIAS(vkCreateDescriptorUpdateTemplate);
@@ -3086,6 +3108,29 @@
 }
 
 #pragma mark -
+#pragma mark VK_EXT_sample_locations extension
+
+void vkGetPhysicalDeviceMultisamplePropertiesEXT(
+	VkPhysicalDevice                            physicalDevice,
+	VkSampleCountFlagBits                       samples,
+	VkMultisamplePropertiesEXT*                 pMultisampleProperties) {
+
+	MVKTraceVulkanCallStart();
+	MVKPhysicalDevice* mvkPD = MVKPhysicalDevice::getMVKPhysicalDevice(physicalDevice);
+	mvkPD->getMultisampleProperties(samples, pMultisampleProperties);
+	MVKTraceVulkanCallEnd();
+}
+
+void vkCmdSetSampleLocationsEXT(
+	VkCommandBuffer                             commandBuffer,
+	const VkSampleLocationsInfoEXT*             pSampleLocationsInfo) {
+
+	MVKTraceVulkanCallStart();
+	MVKAddCmd(SetSampleLocations, commandBuffer, pSampleLocationsInfo);
+	MVKTraceVulkanCallEnd();
+}
+
+#pragma mark -
 #pragma mark VK_GOOGLE_display_timing extension
 
 MVK_PUBLIC_VULKAN_SYMBOL VkResult vkGetRefreshCycleDurationGOOGLE(
diff --git a/MoltenVK/icd/MoltenVK_icd.json b/MoltenVK/icd/MoltenVK_icd.json
index 1b1685d..9c9c182 100644
--- a/MoltenVK/icd/MoltenVK_icd.json
+++ b/MoltenVK/icd/MoltenVK_icd.json
@@ -2,6 +2,7 @@
     "file_format_version" : "1.0.0",
     "ICD": {
         "library_path": "./libMoltenVK.dylib",
-        "api_version" : "1.1.0"
+        "api_version" : "1.1.0",
+        "is_portability_driver" : true
     }
 }
diff --git a/README.md b/README.md
index a78d032..329c8c3 100644
--- a/README.md
+++ b/README.md
@@ -37,7 +37,7 @@
 The recommended method for developing a *Vulkan* application for *macOS* is to use the 
 [*Vulkan SDK*](https://vulkan.lunarg.com/sdk/home).
 
-The *Vulkan SDK* includes a  **MoltenVK** runtime library for *macOS*. *Vulkan* is a layered 
+The *Vulkan SDK* includes a **MoltenVK** runtime library for *macOS*. *Vulkan* is a layered 
 architecture that allows applications to add additional functionality without modifying the 
 application itself. The *Validation Layers* included in the *Vulkan SDK* are an essential debugging
 tool for application developers because they identify inappropriate use of the *Vulkan API*. 
@@ -46,6 +46,14 @@
 Refer to the *Vulkan SDK [Getting Started](https://vulkan.lunarg.com/doc/sdk/latest/mac/getting_started.html)* 
 document for more info.
 
+Because **MoltenVK** supports the `VK_KHR_portability_subset` extension, when you use the 
+*Vulkan Loader* from the *Vulkan SDK* to run **MoltenVK** on *macOS*, the *Vulkan Loader* 
+will include **MoltenVK** `VkPhysicalDevices` in the list returned by 
+`vkEnumeratePhysicalDevices()` only if the `VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR` 
+flag is enabled in `vkCreateInstance()`. See the description of the `VK_KHR_portability_enumeration` 
+extension in the *Vulkan* specification for more information about the use of this flag.
+
 If you are developing a *Vulkan* application for *iOS* or *tvOS*, or are developing a *Vulkan* 
 application for *macOS* and want to use a different version of the **MoltenVK** runtime library 
 provided in the *macOS Vulkan SDK*, you can use this document to learn how to build a **MoltenVK** 
diff --git a/Templates/spirv-tools/build.zip b/Templates/spirv-tools/build.zip
index 7d28bc7..4759c86 100644
--- a/Templates/spirv-tools/build.zip
+++ b/Templates/spirv-tools/build.zip
Binary files differ