What is Metal?
Render advanced 3D graphics and compute data in parallel with graphics processors.
- The Metal API is Apple's low-level API for graphics and compute work.
- Low-level API
  - Metal is an API that talks to the hardware directly.
  - Compared with a high-level API such as OpenGL, it gives finer-grained control (e.g. GPU memory management, render-pipeline configuration).
- Maximizing hardware performance
  - Designed to make the most of both the CPU and the GPU.
  - By cutting unnecessary overhead (inefficient processing) it can squeeze the maximum performance out of the hardware for workloads such as games and VR.
- It plays a role similar to Vulkan, but is optimized for the Apple ecosystem (iOS, macOS, and so on).
- Besides 3D graphics rendering, it also supports parallel compute work on the GPU and gives control over how a game runs.
Basic components
Understanding the Metal API requires three basic concepts (a minimal sketch tying them together follows the list).
- Metal Device (`MTLDevice`)
  - `protocol MTLDevice : NSObjectProtocol`
  - Every piece of work in Metal starts from an `MTLDevice`.
  - The object is an abstraction of the GPU; it manages GPU resources and executes work.
  - It is the interface that connects directly to the GPU.
- Metal Command Queue (`MTLCommandQueue`)
  - `protocol MTLCommandQueue : NSObjectProtocol`
  - A queue that delivers commands to the GPU.
  - Lets you control the order in which commands are executed.
- Metal Buffers (`MTLBuffer`)
  - `protocol MTLBuffer : MTLResource`
  - Memory used to share data with the GPU.
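A minimal sketch that puts the three objects together (error handling kept short; the vertex data here is just an example, not from the post):

```swift
import Metal

// MTLDevice: the abstraction of the GPU
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not supported on this device")
}

// MTLCommandQueue: submits command buffers to the GPU
let commandQueue = device.makeCommandQueue()!

// MTLBuffer: memory shared with the GPU (three vertices as raw floats)
let vertexData: [Float] = [ 0.0,  0.5, 0.0,
                           -0.5, -0.5, 0.0,
                            0.5, -0.5, 0.0]
let vertexBuffer = device.makeBuffer(bytes: vertexData,
                                     length: vertexData.count * MemoryLayout<Float>.size,
                                     options: [])
```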
Rendering process
- Set up the Metal device and layer, then write the vertex data and shaders
```swift
var device: MTLDevice!
var metalLayer: CAMetalLayer!
// ...
device = MTLCreateSystemDefaultDevice()
metalLayer = CAMetalLayer()
metalLayer.device = device
metalLayer.pixelFormat = .bgra8Unorm
metalLayer.framebufferOnly = true
metalLayer.frame = view.layer.frame
view.layer.addSublayer(metalLayer)
// ...
```
- `MTLDevice` (Metal device): the object that connects to the GPU and performs the work.
- `CAMetalLayer`: the Metal layer used for on-screen output; it displays the rendering result.
```swift
let vertexData: [Float] = [
     0.0,  0.5, 0.0,
    -0.5, -0.5, 0.0,
     0.5, -0.5, 0.0
]

let defaultLibrary = device.makeDefaultLibrary()!
let fragmentProgram = defaultLibrary.makeFunction(name: "basic_fragment")
let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")
```
```metal
// Struct layouts assumed to match the Swift side (three floats per vertex, a float4 color).
struct Vertex {
    packed_float3 position;
};

struct VertexOut {
    float4 position [[position]];
};

vertex VertexOut basic_vertex(uint vid [[vertex_id]],
                              constant Vertex* vertices [[buffer(0)]]) {
    VertexOut out;
    out.position = float4(vertices[vid].position, 1.0);
    return out;
}

fragment float4 basic_fragment(VertexOut in [[stage_in]],
                               constant float4 &color [[buffer(1)]]) {
    return color;
}
```
- Vertex data: defines the coordinates of the shape to draw (e.g. a triangle or a rectangle).
- Shaders: GPU code handling the vertex stage (geometric transforms) and the fragment stage (per-pixel color).
- Pipeline setup
```swift
let defaultLibrary = device.makeDefaultLibrary()!
let fragmentProgram = defaultLibrary.makeFunction(name: "basic_fragment")
let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")

let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = fragmentProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

pipelineState = try! device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)
```
- `MTLRenderPipelineState` (render pipeline): links the vertex and fragment shaders and sets the rendering rules.
- Defines the desired graphics output in a form the GPU can understand.
```swift
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1, green: 1, blue: 1, alpha: 0.5)
```
- The pipeline determines the GPU's processing flow when the command queue executes the work.
- Command queue & input buffers
- `MTLBuffer` (input buffer): a region of memory used to pass data from the CPU to the GPU.

```swift
var vertexBuffer: MTLBuffer!
// ...
vertexBuffer = device.makeBuffer(bytes: vertexData, length: dataSize, options: [])
// ...
```
- Examples: vertex data, color information.
- `MTLCommandQueue` (command queue):

```swift
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
renderEncoder.setFragmentBytes(&currentColor, length: MemoryLayout<SIMD4<Float>>.stride, index: 1)
renderEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1)
renderEncoder.endEncoding()

commandBuffer.present(drawable)
commandBuffer.commit()
```
- Commands are written into a command buffer and executed on the GPU.
- Examples: drawing basic shapes such as triangles with `drawPrimitives`, and running GPU work by hooking the input buffers up to the pipeline.
Metal vs MetalKit
| Feature | Metal | MetalKit |
|---|---|---|
| Setup | Configure `CAMetalLayer` manually | Managed automatically by `MTKView` |
| Frame update | Drive updates with `CADisplayLink` manually | Automatic frame management by `MTKView` |
| Texture load | Convert `UIImage` -> `CGImage` -> `MTLTexture` manually | Simple loading with `MTKTextureLoader` |
| 3D model load | Parse `.obj` files yourself and create an `MTLBuffer` | `MTKMesh` support with automatic conversion |
- Setup / frame update
```swift
// Metal
let metalLayer = CAMetalLayer()
metalLayer.device = device
metalLayer.pixelFormat = .bgra8Unorm
metalLayer.framebufferOnly = true
view.layer.addSublayer(metalLayer)
```
- Configure `MTLDevice`, `MTLCommandQueue`, `MTLRenderPipelineState`, and so on by hand.
- Use `CADisplayLink` or a timer to drive frame updates.
```swift
// MetalKit
let mtkView = MTKView(frame: view.bounds, device: device)
mtkView.delegate = self
view.addSubview(mtkView)
```
- `MTKView` manages view updates for you (the delegate side is sketched below).
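A minimal sketch of that delegate side, assuming a `Renderer` class that owns a `commandQueue` (both names are assumptions, not from the post):

```swift
import MetalKit

extension Renderer: MTKViewDelegate {
    // Called when the drawable size changes (rotation, window resize).
    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // update the projection matrix / viewport here
    }

    // Called once per frame by MTKView: encode and submit the GPU work.
    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable,
              let descriptor = view.currentRenderPassDescriptor,
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }
        // ... set the pipeline state, buffers, and draw calls ...
        encoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```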
- Texture load
```swift
// Metal
guard let image = UIImage(named: name)?.cgImage else { return nil }
```
- You have to parse the texture data yourself using a `CGContext`.
```swift
// MetalKit
let textureLoader = MTKTextureLoader(device: device)
let texture = try textureLoader.newTexture(name: "img", scaleFactor: 1.0, bundle: nil)
```
- Handled by `MTKTextureLoader`.
- 3D model load
```swift
// Metal
let vertexBuffer = device.makeBuffer(bytes: vertices,
                                     length: vertices.count * MemoryLayout<Float>.size,
                                     options: [])
```
- Model files such as `.obj` have to be parsed manually.
- You then create an `MTLBuffer` yourself and store the vertex data in it.
```swift
// MetalKit
let asset = MDLAsset(url: url)
let mesh = try MTKMesh.newMeshes(asset: asset, device: device)
```
- `MDLAsset` and `MTKMesh` are supported; using `MDLAsset` + `MTKMesh`, a 3D model is converted automatically into `MTLBuffer`s.
Practicing the OpenGL tutorials with Metal
Coordinate Systems
(Screenshots: Coordinate Systems, Depth X, Depth O / 3D Rotate 1, 3D Rotate 2)
Camera
(Screenshots: Camera Rotate, Gesture Rotate)
- Code for building multiple cubes (Swift); the `translate`/`rotate`/`scale` helpers it uses are sketched after the block.
```swift
let cubePositions: [simd_float3] = [
    simd_float3(-1.0,  1.0, -6.0),  // top left
    simd_float3( 0.0,  1.0,  2.5),  // top center
    simd_float3( 1.0,  1.0, -9.0),  // top right
    simd_float3(-1.0,  0.5, -8.5),  // middle left
    simd_float3( 1.0,  0.5, -2.8),  // middle right
    simd_float3( 0.0,  0.0,  0.0),  // center
    simd_float3(-1.0, -0.5,  3.5),  // bottom left
    simd_float3( 0.0, -0.5, -3.8),  // bottom center
    simd_float3( 1.0, -0.5, -7.0),  // bottom right
    simd_float3( 0.5,  0.0, -9.2)   // center right
]

for i in cubePositions.indices {
    var modelMatrix = matrix_identity_float4x4
    translate(matrix: &modelMatrix, position: cubePositions[i])
    rotate(matrix: &modelMatrix, rotation: rotation + simd_float3(Float(i), Float(i), Float(i)))
    scale(matrix: &modelMatrix, scale: simd_float3(1.0, 1.0, 1.0))

    var modelViewMatrix = viewMatrix * modelMatrix
    renderEncoder.setVertexBytes(&modelViewMatrix,
                                 length: MemoryLayout.stride(ofValue: modelViewMatrix),
                                 index: 2)
    renderEncoder.drawIndexedPrimitives(type: .triangle,
                                        indexCount: cubeIndices.count,
                                        indexType: .uint16,
                                        indexBuffer: indexBuffer,
                                        indexBufferOffset: 0)
}
```
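The `translate`, `rotate`, and `scale` helpers above are not shown in the post; a rough sketch of what they might look like with column-major simd matrices (angles in radians) is below. This is an assumption about the helpers, not the post's actual implementation.

```swift
import Foundation
import simd

// Append a translation to the right of the current matrix.
func translate(matrix: inout simd_float4x4, position: simd_float3) {
    var t = matrix_identity_float4x4
    t.columns.3 = simd_float4(position.x, position.y, position.z, 1.0)
    matrix = matrix * t
}

// Append rotations around X, Y, and Z (Euler angles in rotation.x/y/z).
func rotate(matrix: inout simd_float4x4, rotation: simd_float3) {
    func axisRotation(_ angle: Float, _ axis: simd_float3) -> simd_float4x4 {
        let a = normalize(axis)
        let c = cos(angle), s = sin(angle), ic = 1 - c
        return simd_float4x4(
            simd_float4(c + a.x*a.x*ic,     a.y*a.x*ic + a.z*s, a.z*a.x*ic - a.y*s, 0),
            simd_float4(a.x*a.y*ic - a.z*s, c + a.y*a.y*ic,     a.z*a.y*ic + a.x*s, 0),
            simd_float4(a.x*a.z*ic + a.y*s, a.y*a.z*ic - a.x*s, c + a.z*a.z*ic,     0),
            simd_float4(0, 0, 0, 1))
    }
    matrix = matrix
        * axisRotation(rotation.x, simd_float3(1, 0, 0))
        * axisRotation(rotation.y, simd_float3(0, 1, 0))
        * axisRotation(rotation.z, simd_float3(0, 0, 1))
}

// Append a non-uniform scale.
func scale(matrix: inout simd_float4x4, scale: simd_float3) {
    var s = matrix_identity_float4x4
    s.columns.0.x = scale.x
    s.columns.1.y = scale.y
    s.columns.2.z = scale.z
    matrix = matrix * s
}
```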
- Gesture code (Swift)
```swift
// MARK: - Camera
struct Camera {
    var position: simd_float3
    var zoomLevel: Float
    var panDelta: simd_float2
}

// omitted
// ....

// MARK: - handlePanGesture
@objc private func handlePanGesture(_ gesture: UIPanGestureRecognizer) {
    let translation = gesture.translation(in: view)
    let sensitivity: Float = 0.01
    camera.panDelta.x += Float(translation.x) * sensitivity
    camera.panDelta.y += Float(translation.y) * sensitivity
    gesture.setTranslation(.zero, in: view)
} // handlePanGesture

// MARK: - handlePinchGesture
@objc private func handlePinchGesture(_ gesture: UIPinchGestureRecognizer) {
    let zoomSensitivity: Float = 0.05
    if gesture.state == .changed {
        camera.zoomLevel -= Float(gesture.velocity) * zoomSensitivity
        camera.zoomLevel = max(10.0, min(90.0, camera.zoomLevel))  // clamp the zoom level
    }
} // handlePinchGesture
```
Basic Lighting
(Screenshots: Ambient lighting, Diffuse lighting / Specular Lighting, Specular Lighting 32 rotate / Phong, Phong rotate)
- Using two pipelines; the `setupDepthStencilState()` helper it calls is sketched after the block.
```swift
// MARK: - setupPipeline
private func setupPipeline() {
    let library = device.makeDefaultLibrary()
    let vertexFunction = library?.makeFunction(name: "vertex_shader")
    let fragmentFunction = library?.makeFunction(name: "fragment_shader_main")

    // Existing cubes
    let pipelineDescriptor = MTLRenderPipelineDescriptor()
    pipelineDescriptor.vertexFunction = vertexFunction
    pipelineDescriptor.fragmentFunction = fragmentFunction
    pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    pipelineDescriptor.depthAttachmentPixelFormat = .depth32Float

    do {
        mainPipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
    } catch let error {
        fatalError("failed to create pipeline: \(error)")
    }

    // Light-source cube
    let fragmentSubFunction = library?.makeFunction(name: "fragment_shader_sub")
    let subPipelineDescriptor = MTLRenderPipelineDescriptor()
    subPipelineDescriptor.vertexFunction = vertexFunction
    subPipelineDescriptor.fragmentFunction = fragmentSubFunction
    subPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    subPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float

    do {
        subPipelineState = try device.makeRenderPipelineState(descriptor: subPipelineDescriptor)
    } catch let error {
        fatalError("failed to create light-cube pipeline: \(error)")
    }

    if let deptSten = setupDepthStencilState() {
        depthStencilState = deptSten
    }
    return
} // setupPipeline
```
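`setupDepthStencilState()` is referenced above but not shown in the post; a minimal sketch, assuming it lives on the same renderer that owns `device`:

```swift
// MARK: - setupDepthStencilState (sketch, not the post's actual code)
private func setupDepthStencilState() -> MTLDepthStencilState? {
    let descriptor = MTLDepthStencilDescriptor()
    descriptor.depthCompareFunction = .less   // keep the fragment closest to the camera
    descriptor.isDepthWriteEnabled = true     // write depth so later fragments can be tested against it
    return device.makeDepthStencilState(descriptor: descriptor)
}
```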
- diffuseLighting
```metal
// MARK: - diffuseLighting
inline float3 diffuseLighting(float3 normal, float3 lightDir, float3 lightColor) {
    float diffuseStrength = max(dot(normal, lightDir), 0.0);
    float3 diffuse = diffuseStrength * lightColor;
    return diffuse;
} // diffuseLighting
```
- specularLighting
```metal
// MARK: - specularLighting
inline float3 specularLighting(float3 fragPosition, float3 viewPosition, float3 lightDir, float3 normal, float3 lightColor) {
    float3 viewDir = normalize(viewPosition - fragPosition);
    float3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 64);
    float3 specular = spec * lightColor;
    return specular;
} // specularLighting
```
- phongLighting
```metal
// MARK: - phongLighting
inline float3 phongLighting(float3 ambient, float3 fragPosition, float3 lightPosition, float3 viewPosition, float3 normal, float3 lightColor) {
    float3 lightDir = normalize(lightPosition - fragPosition);
    return ambient
         + diffuseLighting(normal, lightDir, lightColor)
         + specularLighting(fragPosition, viewPosition, lightDir, normal, lightColor);
} // phongLighting
```
Materials
(Screenshots: Materials 1, Materials 2)
- Vertex Shader
```swift
// MARK: - TransformUniforms
struct TransformUniforms {
    var projectionMatrix: simd_float4x4
    var modelMatrix: simd_float4x4
    var viewMatrix: simd_float4x4

    init(projectionMatrix: simd_float4x4, modelMatrix: simd_float4x4, viewMatrix: simd_float4x4) {
        self.projectionMatrix = projectionMatrix
        self.modelMatrix = modelMatrix
        self.viewMatrix = viewMatrix
    } // init
} // TransformUniforms
```
World-space coordinates are applied and a `normalMatrix` is added:

```metal
// MARK: - vertex_shader
vertex VertexOut vertex_shader(uint vid [[vertex_id]],
                               constant VertexIn* vertices [[buffer(0)]],
                               constant TransformUniforms& transformUniforms [[buffer(1)]],
                               constant float3x3& normalMatrix [[buffer(2)]]) {
    VertexOut out;
    float3 worldPosition = (transformUniforms.modelMatrix * float4(vertices[vid].position, 1.0)).xyz;
    out.position = transformUniforms.projectionMatrix * transformUniforms.viewMatrix * float4(worldPosition, 1.0);
    out.normal = normalize(normalMatrix * vertices[vid].normal);
    out.fragPosition = worldPosition;
    return out;
} // vertex_shader
```
- Fragment Shader
```swift
// MARK: - LightUniforms
struct LightUniforms {
    var lightPosition: simd_float3
    var cameraPosition: simd_float3
    var lightColor: simd_float3
    var objectColor: simd_float3

    init(lightPosition: simd_float3, cameraPosition: simd_float3, lightColor: simd_float3, objectColor: simd_float3) {
        self.lightPosition = lightPosition
        self.cameraPosition = cameraPosition
        self.lightColor = lightColor
        self.objectColor = objectColor
    } // init
} // LightUniforms
```
`objectColor` is applied to the result of `phongLighting`:

```metal
// MARK: - fragment_shader_main
fragment float4 fragment_shader_main(VertexOut in [[stage_in]],
                                     constant LightUniforms& lightUniform [[buffer(1)]],
                                     constant TransformUniforms& transformUniforms [[buffer(2)]],
                                     constant float3& ambient [[buffer(3)]]) {
    float3 lighting = phongLighting(ambient,
                                    in.fragPosition,
                                    lightUniform.lightPosition,
                                    lightUniform.cameraPosition,
                                    in.normal,
                                    lightUniform.lightColor);
    return float4(lighting * lightUniform.objectColor, 1.0);
} // fragment_shader_main
```
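On the CPU side, these uniforms could be bound with `setFragmentBytes` at the buffer indices the shader declares; a sketch with placeholder values (not from the post):

```swift
// Matches buffer(1) and buffer(3) in fragment_shader_main above (buffer(2) would carry TransformUniforms).
var lightUniforms = LightUniforms(lightPosition: simd_float3(1.2, 1.0, 2.0),    // placeholder values
                                  cameraPosition: simd_float3(0.0, 0.0, 3.0),
                                  lightColor: simd_float3(1.0, 1.0, 1.0),
                                  objectColor: simd_float3(1.0, 0.5, 0.31))
var ambient = simd_float3(repeating: 0.1)

renderEncoder.setFragmentBytes(&lightUniforms, length: MemoryLayout<LightUniforms>.stride, index: 1)
renderEncoder.setFragmentBytes(&ambient, length: MemoryLayout<simd_float3>.stride, index: 3)
```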
Light Map
(Screenshots: Diffuse maps, Specular maps)
- Renderer texture-related code (Swift)
  - loadTexture
```swift
public func loadTexture(_ name: String) throws -> MTLTexture? {
    guard let image = UIImage(named: name)?.cgImage else {
        print("could not load \(name)")
        return nil
    } // 1

    let width = image.width
    let height = image.height
    let textureDescriptor = MTLTextureDescriptor()
    textureDescriptor.pixelFormat = .rgba8Unorm
    textureDescriptor.width = width
    textureDescriptor.height = height
    textureDescriptor.usage = [.shaderRead] // 2

    guard let texture = device.makeTexture(descriptor: textureDescriptor) else {
        print("failed to create texture")
        return nil
    } // 3

    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    let imageData = UnsafeMutablePointer<UInt8>.allocate(capacity: bytesPerRow * height)
    defer { imageData.deallocate() } // 4

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: imageData,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    context?.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height)) // 5

    let region = MTLRegionMake2D(0, 0, width, height)
    texture.replace(region: region, mipmapLevel: 0, withBytes: imageData, bytesPerRow: bytesPerRow) // 6

    return texture
} // loadTexture
```
1. Load the image with `UIImage` and get a `CGImage` object from it.
2. Read the `width` and `height`, create an `MTLTextureDescriptor`, and set the texture's properties.
   - `pixelFormat = .rgba8Unorm`: the pixel data is 8-bit RGBA.
   - `usage = [.shaderRead]`: the texture will be read-only in shaders.
3. Create the Metal texture with `device.makeTexture(descriptor: textureDescriptor)`.
4. Load the image data into memory.
   - Each pixel takes 4 bytes (RGBA), so `bytesPerPixel = 4`.
   - `bytesPerRow = 4 * width` gives the number of bytes per row.
   - An `imageData` buffer is allocated with `UnsafeMutablePointer<UInt8>`.
   - `defer` releases the memory with `imageData.deallocate()` when the function exits.
5. Copy the image data with a `CGContext`.
   - `CGColorSpaceCreateDeviceRGB()` creates an RGB color space.
   - A `CGContext` is created so the image data can be written into `imageData`.
   - `context?.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))` draws the image into the `imageData` buffer.
6. Copy the image data into the Metal texture.
   - `MTLRegionMake2D(0, 0, width, height)` specifies the texture region.
   - `texture.replace(region:mipmapLevel:withBytes:bytesPerRow:)` copies `imageData` into the Metal texture.
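Before the fragment shader below, here is one way the loaded textures and a sampler might be bound to the indices it expects (`texture(0)`, `texture(1)`, `sampler(0)`); the asset names are made up for illustration:

```swift
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .linear
samplerDescriptor.magFilter = .linear
samplerDescriptor.sAddressMode = .repeat
samplerDescriptor.tAddressMode = .repeat
let samplerState = device.makeSamplerState(descriptor: samplerDescriptor)

do {
    let diffuseMap = try loadTexture("container2")             // hypothetical asset names
    let specularMap = try loadTexture("container2_specular")
    renderEncoder.setFragmentTexture(diffuseMap, index: 0)     // texture(0): diffuse map
    renderEncoder.setFragmentTexture(specularMap, index: 1)    // texture(1): specular map
    renderEncoder.setFragmentSamplerState(samplerState, index: 0)  // sampler(0)
} catch {
    print("texture load failed: \(error)")
}
```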
- Fragment shader
```metal
// MARK: - fragment_shader_main
fragment float4 fragment_shader_main(VertexOut in [[stage_in]],
                                     texture2d<float> diffTex [[texture(0)]],
                                     texture2d<float> specTex [[texture(1)]],
                                     sampler sam [[sampler(0)]],
                                     constant LightUniforms& lightUniform [[buffer(1)]],
                                     constant TransformUniforms& transformUniforms [[buffer(2)]]) {
    float3 diffuseTextureColor = diffTex.sample(sam, in.texCoord).rgb;
    float3 specularTextureColor = specTex.sample(sam, in.texCoord).rgb;

    float3 lightDir = normalize(lightUniform.lightPosition - in.fragPosition);

    float3 ambient = lightUniform.ambient * diffuseTextureColor;

    float diff = max(dot(in.normal, lightDir), 0.0);
    float3 diffuse = lightUniform.diffuse * diff * diffuseTextureColor;

    float3 viewDir = normalize(lightUniform.cameraPosition - in.fragPosition);
    float3 reflectDir = reflect(-lightDir, in.normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 64.0);
    float3 specular = lightUniform.specular * spec * specularTextureColor;

    float3 lighting = ambient + diffuse + specular;
    return float4(lighting, 1.0);
} // fragment_shader_main
```
Light Casters
(Screenshots: Directional Light, Point lights, Spotlight)
- Flashlight
```metal
// Flashlight
// MARK: - fragment_shader_Flashlight
fragment float4 fragment_shader_Flashlight(VertexOut in [[stage_in]],
                                           texture2d<float> diffTex [[texture(0)]],
                                           texture2d<float> specTex [[texture(1)]],
                                           constant TransformUniforms& transformUniforms [[buffer(1)]],
                                           constant LightUniforms& lightUniforms [[buffer(2)]],
                                           constant float3& cameraPosition [[buffer(3)]]) {
    constexpr sampler sam(mip_filter::linear, mag_filter::linear, min_filter::linear, address::repeat);
    float3 result;
    float3 diffuseTextureColor = diffTex.sample(sam, in.texCoord).rgb;
    float3 specularTextureColor = specTex.sample(sam, in.texCoord).rgb;

    float3 lightDir = normalize(lightUniforms.position - in.fragPosition);
    float theta = dot(lightDir, normalize(-lightUniforms.direction));

    float3 ambient = lightUniforms.ambient * diffuseTextureColor;

    if (theta > lightUniforms.cutOff.x) {
        float diff = max(dot(in.normal, lightDir), 0.0);
        float3 diffuse = lightUniforms.diffuse * diff * diffuseTextureColor;

        float3 viewDir = normalize(cameraPosition - in.fragPosition);
        float3 reflectDir = reflect(-lightDir, in.normal);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 64.0);
        float3 specular = lightUniforms.specular * spec * specularTextureColor;

        float dist = length(lightUniforms.position - in.fragPosition);
        float attenuation = 1.0 / (lightUniforms.constants.x + lightUniforms.linears.x * dist + lightUniforms.quadratics.x * (dist * dist));

        diffuse *= attenuation;
        specular *= attenuation;

        result = ambient + diffuse + specular;
    } else {
        result = ambient;
    }

    return float4(result, 1.0);
} // fragment_shader_Flashlight
```
- Spotlight
```metal
float theta = dot(lightDir, normalize(-lightUniforms.direction));
float epsilon = lightUniforms.cutOff.x - lightUniforms.outerCutOff.x;
float intensity = clamp((theta - lightUniforms.outerCutOff.x) / epsilon, 0.0, 1.0);
diffuse *= intensity;
specular *= intensity;

float dist = length(lightUniforms.position - in.fragPosition);
float attenuation = 1.0 / (lightUniforms.constants.x + lightUniforms.linears.x * dist + lightUniforms.quadratics.x * (dist * dist));
ambient *= attenuation;
diffuse *= attenuation;
specular *= attenuation;
```
Multiple Lights
(Screenshots: Multiple Lights 1, Multiple Lights 2)
- fragment_shader
```metal
// MARK: - fragment_shader_main
fragment float4 fragment_shader_main(VertexOut in [[stage_in]],
                                     texture2d<float> diffTex [[texture(0)]],
                                     texture2d<float> specTex [[texture(1)]],
                                     sampler sam [[sampler(0)]],
                                     constant TransformUniforms& transformUniforms [[buffer(1)]],
                                     constant SpotLight& spotLight [[buffer(2)]],
                                     constant DirLight& dirLight [[buffer(3)]],
                                     constant PointLights& pointLights [[buffer(4)]],
                                     constant float3& cameraPosition [[buffer(5)]]) {
    float3 diffuseTextureColor = diffTex.sample(sam, in.texCoord).rgb;
    float3 specularTextureColor = specTex.sample(sam, in.texCoord).rgb;
    float3 viewDir = normalize(cameraPosition - in.fragPosition);

    float3 result = calcDirLight(dirLight, in.normal, viewDir, diffuseTextureColor, specularTextureColor);

    for (uint32_t i = 0; i < 4; ++i) {
        result += calcPointLight(pointLights.pointLightArr[i], in.normal, in.fragPosition, viewDir, diffuseTextureColor, specularTextureColor);
    }

    result += calcSpotLight(spotLight, in.normal, in.fragPosition, viewDir, diffuseTextureColor, specularTextureColor);

    return float4(result, 1.0);
} // fragment_shader_main
```
- calcDirLight
```metal
// MARK: - calcDirLight
inline float3 calcDirLight(DirLight light, float3 norm, float3 viewDir, float3 diffuseTextureColor, float3 specularTextureColor) {
    float3 lightDir = normalize(-light.direction);
    float diff = max(dot(norm, lightDir), 0.0);

    float3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);

    float3 ambient = light.ambient * diffuseTextureColor;
    float3 diffuse = light.diffuse * diff * diffuseTextureColor;
    float3 specular = light.specular * spec * specularTextureColor;

    return (ambient + diffuse + specular);
} // CalcDirLight
```
- calcPointLight
```metal
// MARK: - calcPointLight
inline float3 calcPointLight(PointLight light, float3 norm, float3 fragPos, float3 viewDir, float3 diffuseTextureColor, float3 specularTextureColor) {
    float3 lightDir = normalize(light.position - fragPos);
    float diff = max(dot(norm, lightDir), 0.0);

    float3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);

    float dist = length(light.position - fragPos);
    float attenuation = 1.0 / (light.constants.x + light.linears.x * dist + light.quadratics.x * (dist * dist));

    float3 ambient = light.ambient * diffuseTextureColor;
    float3 diffuse = light.diffuse * diff * diffuseTextureColor;
    float3 specular = light.specular * spec * specularTextureColor;

    ambient *= attenuation;
    diffuse *= attenuation;
    specular *= attenuation;

    return (ambient + diffuse + specular);
} // calcPointLight
```
- calcSpotLight
```metal
// MARK: - calcSpotLight
inline float3 calcSpotLight(SpotLight light, float3 norm, float3 fragPos, float3 viewDir, float3 diffuseTextureColor, float3 specularTextureColor) {
    float3 lightDir = normalize(light.position - fragPos);
    float diff = max(dot(norm, lightDir), 0.0);

    float3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);

    float dist = length(light.position - fragPos);
    float attenuation = 1.0 / (light.constants.x + light.linears.x * dist + light.quadratics.x * (dist * dist));

    float theta = dot(lightDir, normalize(-light.direction));
    float epsilon = light.cutOff.x - light.outerCutOff.x;
    float intensity = clamp((theta - light.outerCutOff.x) / epsilon, 0.0, 1.0);

    float3 ambient = light.ambient * diffuseTextureColor;
    float3 diffuse = light.diffuse * diff * diffuseTextureColor;
    float3 specular = light.specular * spec * specularTextureColor;

    ambient *= (attenuation * intensity);
    diffuse *= (attenuation * intensity);
    specular *= (attenuation * intensity);

    return (ambient + diffuse + specular);
} // calcSpotLight
```
Mesh
```swift
// MARK: - Material
struct Material {
var textures: [MTLTexture?] = Array(repeating: nil, count: MaterialIndex.allCases.count)
static private var textureMap: [MDLTexture?: MTLTexture?] = [:]
// MARK: - init
init(mdlMaterial: MDLMaterial?, textureLoader: MTKTextureLoader) {
MaterialIndex.allCases.forEach { index in
textures[index.rawValue] = loadTexture(index.semantic, mdlMaterial: mdlMaterial, textureLoader: textureLoader)
} // forEach
} // init
// MARK: - loadTexture
private func loadTexture(_ semantic: MDLMaterialSemantic,
mdlMaterial: MDLMaterial?,
textureLoader: MTKTextureLoader) -> MTLTexture? {
guard let materialProperty = mdlMaterial?.property(with: semantic) else { return nil }
guard let sourceTexture = materialProperty.textureSamplerValue?.texture else { return nil }
if let texture = Material.textureMap[sourceTexture] {
return texture
}
let texture = try? textureLoader.newTexture(texture: sourceTexture, options: nil)
Material.textureMap[sourceTexture] = texture
return texture
} // loadTexture
} // Material

// MARK: - Mesh
class Mesh {
private var mesh: MTKMesh
private var materials: [Material]
// MARK: - init
init(mesh: MTKMesh, materials: [Material]) {
self.mesh = mesh
self.materials = materials
} // init
// MARK: - draw
func draw(renderEncoder: MTLRenderCommandEncoder) {
guard let vertexBuffer = mesh.vertexBuffers.first else {
return
}
renderEncoder.setVertexBuffer(vertexBuffer.buffer,
offset: vertexBuffer.offset,
index: VertexBufferIndex.attributes.rawValue)
for (submesh, material) in zip(mesh.submeshes, materials) {
MaterialIndex.allCases.forEach { index in
renderEncoder.setFragmentTexture(material.textures[index.rawValue], index: index.rawValue)
} // forEach
var stateUniform = MaterialStateUniform(textures: material.textures)
renderEncoder.setFragmentBytes(&stateUniform,
length: MemoryLayout<MaterialStateUniform>.size,
index: FragmentBufferIndex.materialStateUniform.rawValue)
// Draw
renderEncoder.drawIndexedPrimitives(type: MTLPrimitiveType.triangle,
indexCount: submesh.indexCount,
indexType: submesh.indexType,
indexBuffer: submesh.indexBuffer.buffer,
indexBufferOffset: submesh.indexBuffer.offset)
} // for
} // draw
} // Mesh

// MARK: - Model
class Model {
// Model property
private var meshes: [Mesh] = []
// property
private let position: simd_float3 = simd_float3(repeating: 0.0)
private let angle: Float = 30.0
private let axis: simd_float3 = simd_float3(0.0, 1.0, 0.0)
private let scales: simd_float3 = simd_float3(repeating: 0.4)
// MARK: - init
init(device: MTLDevice,
url: URL,
vertexDescriptor: MTLVertexDescriptor,
textureLoader: MTKTextureLoader) {
loadModel(device: device, url: url, vertexDescriptor: vertexDescriptor, textureLoader: textureLoader)
} // init
// MARK: - draw
func draw(renderEncoder: MTLRenderCommandEncoder) {
var modelUniform = ModelUniform(position: self.position,
angle: self.angle,
axis: self.axis,
scales: self.scales)
renderEncoder.setVertexBytes(&modelUniform, length: MemoryLayout<ModelUniform>.size, index: VertexBufferIndex.modelUniform.rawValue)
for mesh in self.meshes {
mesh.draw(renderEncoder: renderEncoder)
} // for
} // draw
// MARK: - Private
// ...
// MARK: - loadModel
private func loadModel(device: MTLDevice, url: URL,
vertexDescriptor: MTLVertexDescriptor, textureLoader: MTKTextureLoader) {
let modelVertexDescriptor = VertexDescriptorManager.buildMDLVertexDescriptor(vertexDescriptor: vertexDescriptor)
let bufferAllocator = MTKMeshBufferAllocator(device: device)
let asset = MDLAsset(url: url, vertexDescriptor: modelVertexDescriptor, bufferAllocator: bufferAllocator)
asset.loadTextures()
guard let (mdlMeshes, mtkMeshes) = try? MTKMesh.newMeshes(asset: asset, device: device) else {
print("meshes 생성 실패")
return
}
self.meshes.reserveCapacity(mdlMeshes.count)
for (mdlMesh, mtkMesh) in zip(mdlMeshes, mtkMeshes) {
mdlMesh.addOrthTanBasis(forTextureCoordinateAttributeNamed: MDLVertexAttributeTextureCoordinate,
normalAttributeNamed: MDLVertexAttributeNormal,
tangentAttributeNamed: MDLVertexAttributeTangent)
let mesh = processMesh(mdlMesh: mdlMesh, mtkMesh: mtkMesh, textureLoader: textureLoader)
self.meshes.append(mesh)
} // for
} // loadModel
// MARK: - processMesh
private func processMesh(mdlMesh: MDLMesh, mtkMesh: MTKMesh, textureLoader: MTKTextureLoader) -> Mesh {
var materials: [Material] = []
for mdlSubmesh in mdlMesh.submeshes as! [MDLSubmesh] {
let material = Material(mdlMaterial: mdlSubmesh.material, textureLoader: textureLoader)
materials.append(material)
} // for
return Mesh(mesh: mtkMesh, materials: materials)
} // processMesh
} // Model
```
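The code above relies on a few enums (`MaterialIndex`, `VertexBufferIndex`, `FragmentBufferIndex`) that are not shown in the post. A sketch of what they might look like, inferred from how they are used (the exact cases and semantics are assumptions):

```swift
import ModelIO

// One slot per texture kind; rawValue doubles as the fragment texture index.
enum MaterialIndex: Int, CaseIterable {
    case diffuse, specular, normal, roughness, ao

    // The Model I/O semantic used when pulling the texture out of an MDLMaterial.
    var semantic: MDLMaterialSemantic {
        switch self {
        case .diffuse:   return .baseColor
        case .specular:  return .specular
        case .normal:    return .tangentSpaceNormal
        case .roughness: return .roughness
        case .ao:        return .ambientOcclusion
        }
    }
}

enum VertexBufferIndex: Int {
    case attributes = 0    // interleaved vertex data from MTKMesh
    case modelUniform = 1
    case viewUniform = 2
}

enum FragmentBufferIndex: Int {
    case lightUniform = 0
    case materialStateUniform = 1
    case cameraPosition = 2
}
```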
Model

Survival BackPack
(Screenshots: Model 1, Model 2, Model 3 / with lighting, Normal map, Roughness + AO)
- Vertex Shader
```metal
vertex VertexOut vertexFunction(VertexIn in [[stage_in]],
                                constant ViewUniform& viewUniform [[buffer(vertexBufferIndexView)]],
                                constant ModelUniform& modelUniform [[buffer(vertexBufferIndexModel)]]) {
    VertexOut out;
    out.worldPosition = (modelUniform.modelMatrix * float4(in.position, 1.0)).xyz;
    out.position = viewUniform.projectionMatrix * viewUniform.viewMatrix * float4(out.worldPosition, 1.0);
    out.texCoord = in.texCoord;
    out.normal = in.normal;

    float3 T = normalize(modelUniform.normalMatrix * in.tangent.xyz);
    float3 N = normalize(modelUniform.normalMatrix * in.normal);
    T = normalize(T - dot(T, N) * N);
    float3 B = cross(N, T) * in.tangent.w;

    out.T = T;
    out.B = B;
    out.N = N;
    return out;
} // vertexFunction
```
- Fragment Shader
```metal
fragment float4 fragmentFunction(VertexOut in [[stage_in]],
                                 texture2d<float> diffuseTexture [[texture(textureIndexDiffuse)]],
                                 texture2d<float> specularTexture [[texture(textureIndexSpecular)]],
                                 texture2d<float> normalTexture [[texture(textureIndexNormal)]],
                                 texture2d<float> roughnessTexture [[texture(textureIndexRoughness)]],
                                 texture2d<float> aoTexture [[texture(textureIndexAo)]],
                                 constant LightUniform& lightUniform [[buffer(fragmentBufferIndexLight)]],
                                 constant MaterialStateUniform& stateUniform [[buffer(fragmentBufferIndexMaterialState)]]) {
    constexpr sampler colorSampler(mip_filter::linear, mag_filter::linear, min_filter::linear, address::repeat);

    float4 diffuseColor = (stateUniform.hasDiffuseTexture ? diffuseTexture.sample(colorSampler, in.texCoord) : float4(1.0));
    float4 specularColor = (stateUniform.hasSpecularTexture ? specularTexture.sample(colorSampler, in.texCoord) : float4(1.0));
    float4 normalColor = (stateUniform.hasNormalTexture ? normalTexture.sample(colorSampler, in.texCoord) : float4(1.0));
    float roughnessColor = (stateUniform.hasRoughnessTexture ? roughnessTexture.sample(colorSampler, in.texCoord) : float4(1.0)).r;
    float aoColor = (stateUniform.hasAoTexture ? aoTexture.sample(colorSampler, in.texCoord) : float4(1.0)).r;

    return applyNormalmaps(lightUniform, diffuseColor, specularColor, normalColor,
                           float3x3(in.T, in.B, in.N), in.worldPosition, roughnessColor, aoColor);
} // fragmentFunction
```
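The `has*Texture` flags read above could come from a small struct on the Swift side, filled in from which texture slots actually loaded. A sketch (the field names and layout are assumptions, matching `Mesh.draw` and the shader signature):

```swift
import Metal

// Indices 0...4 correspond to diffuse, specular, normal, roughness, AO.
struct MaterialStateUniform {
    var hasDiffuseTexture: Bool
    var hasSpecularTexture: Bool
    var hasNormalTexture: Bool
    var hasRoughnessTexture: Bool
    var hasAoTexture: Bool

    init(textures: [MTLTexture?]) {
        // A slot that failed to load falls back to float4(1.0) in the shader.
        hasDiffuseTexture   = textures[0] != nil
        hasSpecularTexture  = textures[1] != nil
        hasNormalTexture    = textures[2] != nil
        hasRoughnessTexture = textures[3] != nil
        hasAoTexture        = textures[4] != nil
    }
}
```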
Sponza
(Screenshots: Sponza 1, Sponza 2, Sponza without lighting / Sponza lighting 1, Sponza lighting 2, Sponza lighting 3)
- Camera
```swift
// MARK: - Camera
class Camera {
    var position: simd_float3
    var front: simd_float3
    var up: simd_float3
    var right: simd_float3
    var worldUp: simd_float3
    var yaw: Float
    var pitch: Float
    var movementSpeed: Float = 3.0
    var mouseSensitivity: Float = 1.0
    var zoom: Float = 45.0

    // MARK: - init
    init(position: simd_float3, up: simd_float3 = simd_float3(0, 1, 0), yaw: Float = -90.0, pitch: Float = 0.0) {
        self.position = position
        self.worldUp = up
        self.yaw = yaw
        self.pitch = pitch
        self.front = simd_float3(0, 0, -1)
        self.right = simd_float3(1, 0, 0)
        self.up = up
        updateCameraVectors()
    } // init

    // MARK: - Public
    // ...

    // MARK: - processKeyboard
    // Keyboard input handling (WASD movement)
    func processKeyboard(_ direction: CameraMovement, deltaTime: Float) {
        let velocity = self.movementSpeed * deltaTime
        switch direction {
        case .forward:
            self.position += self.front * velocity
        case .backward:
            self.position -= self.front * velocity
        case .left:
            self.position -= self.right * velocity
        case .right:
            self.position += self.right * velocity
        }
    } // processKeyboard

    // MARK: - processMouseMovement
    // Mouse movement handling (camera rotation)
    func processMouseMovement(xOffset: Float, yOffset: Float, constrainPitch: Bool = true) {
        let xOffset = xOffset * self.mouseSensitivity
        let yOffset = yOffset * self.mouseSensitivity
        self.yaw += xOffset
        self.pitch += yOffset
        if constrainPitch {
            self.pitch = max(-89.0, min(89.0, self.pitch))
        }
        updateCameraVectors()
    } // processMouseMovement

    // MARK: - processMouseScroll
    // Zoom control
    func processMouseScroll(yOffset: Float) {
        self.zoom -= yOffset
        self.zoom = max(1.0, min(45.0, self.zoom))
    } // processMouseScroll

    // MARK: - getViewMatrix
    func getViewMatrix(eyePosition: simd_float3? = nil) -> simd_float4x4 {
        if let pos = eyePosition {
            return simd_float4x4.identity().lookAt(eyePosition: pos,
                                                   targetPosition: simd_float3(repeating: 0.0),
                                                   upVec: simd_float3(0.0, 1.0, 0.0))
        }
        return simd_float4x4.identity().lookAt(eyePosition: self.position,
                                               targetPosition: self.position + self.front,
                                               upVec: self.up)
    } // getViewMatrix

    // MARK: - getProjectionMatrix
    func getProjectionMatrix() -> simd_float4x4 {
        return simd_float4x4.identity().perspective(fov: Float(45).toRadians(),
                                                    aspectRatio: 1.0,
                                                    nearPlane: 0.1,
                                                    farPlane: 100.0)
    } // getProjectionMatrix

    // MARK: - Private
    // ...

    // MARK: - updateCameraVectors
    // Recompute the camera basis vectors
    private func updateCameraVectors() {
        let yawRad = self.yaw.toRadians()
        let pitchRad = self.pitch.toRadians()
        let frontX = cos(yawRad) * cos(pitchRad)
        let frontY = sin(pitchRad)
        let frontZ = sin(yawRad) * cos(pitchRad)
        self.front = normalize(simd_float3(frontX, frontY, frontZ))
        self.right = normalize(cross(self.front, self.worldUp))
        self.up = normalize(cross(self.right, self.front))
    } // updateCameraVectors
} // Camera

// MARK: - CameraMovement
enum CameraMovement {
    case forward, backward, left, right
} // CameraMovement
```
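`Camera` leans on a few extensions that are not shown in the post (`toRadians`, `identity()`, `lookAt`, `perspective`, `orthographicProjection`). A rough sketch of some of them, assuming right-handed view space and Metal's [0, 1] clip-space depth; this is not the post's actual implementation:

```swift
import Foundation
import simd

extension Float {
    func toRadians() -> Float { self * .pi / 180.0 }
}

extension simd_float4x4 {
    static func identity() -> simd_float4x4 { matrix_identity_float4x4 }

    // Instance methods only so they can be chained off identity(), matching the call sites above.

    // View matrix looking from eyePosition toward targetPosition.
    func lookAt(eyePosition: simd_float3, targetPosition: simd_float3, upVec: simd_float3) -> simd_float4x4 {
        let z = normalize(eyePosition - targetPosition)   // camera looks down -z
        let x = normalize(cross(upVec, z))
        let y = cross(z, x)
        return simd_float4x4(
            simd_float4(x.x, y.x, z.x, 0),
            simd_float4(x.y, y.y, z.y, 0),
            simd_float4(x.z, y.z, z.z, 0),
            simd_float4(-dot(x, eyePosition), -dot(y, eyePosition), -dot(z, eyePosition), 1))
    }

    // Perspective projection; fov is the vertical field of view in radians.
    func perspective(fov: Float, aspectRatio: Float, nearPlane: Float, farPlane: Float) -> simd_float4x4 {
        let ys = 1 / tan(fov * 0.5)
        let xs = ys / aspectRatio
        let zs = farPlane / (nearPlane - farPlane)
        return simd_float4x4(
            simd_float4(xs, 0, 0, 0),
            simd_float4(0, ys, 0, 0),
            simd_float4(0, 0, zs, -1),
            simd_float4(0, 0, zs * nearPlane, 0))
    }
}
```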
- Model Pass
```swift
// MARK: - ModelPass
class ModelPass {
    // Properties
    var vertexDescriptor: MTLVertexDescriptor
    private var renderPipelineState: MTLRenderPipelineState?
    private var shadowSampler: MTLSamplerState?
    private let lightDir: simd_float3 = simd_float3(0.436436, -0.572872, 0.218218)

    // MARK: - init
    init(device: MTLDevice, mkView: MTKView, vertexFunction: String, fragmentFunction: String) {
        self.vertexDescriptor = DescriptorManager.buildVertexDescriptor(attributeLength: 4)
        self.renderPipelineState = DescriptorManager.buildPipelineDescriptor(device: device,
                                                                             metalKitView: mkView,
                                                                             vertexDescriptor: self.vertexDescriptor,
                                                                             vertexFunctionName: vertexFunction,
                                                                             fragmentFunctionName: fragmentFunction)
        self.shadowSampler = DescriptorManager.buildSamplerDescriptor(device: device,
                                                                      minFilter: .linear,
                                                                      magFilter: .linear,
                                                                      compareFunction: .less)
    } // init

    // MARK: - encode
    func encode(commandBuffer: MTLCommandBuffer,
                mkView: MTKView,
                depthStencilState: MTLDepthStencilState?,
                render: (MTLRenderCommandEncoder) -> Void,
                camera: inout Camera,
                shadowMap: MTLTexture?) {
        let renderPassDescriptor = DescriptorManager.buildMTLRenderPassDescriptor(view: mkView, r: 0.416, g: 0.636, b: 0.722, alpha: 1.0)
        let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
        renderEncoder.setRenderPipelineState(self.renderPipelineState!)
        renderEncoder.setDepthStencilState(depthStencilState)

        var viewUniform = ViewUniform(viewMatrix: camera.getViewMatrix(),
                                      projectionMatrix: camera.getProjectionMatrix())
        renderEncoder.setVertexBytes(&viewUniform,
                                     length: MemoryLayout<ViewUniform>.size,
                                     index: VertexBufferIndex.viewUniform.rawValue)

        var lightUniform = LightUniform(viewMatrix: camera.getViewMatrix(eyePosition: lightDir),
                                        projectionMatrix: simd_float4x4.identity().orthographicProjection(l: -10.0, r: 10.0,
                                                                                                          bottom: -10.0, top: 10.0,
                                                                                                          zNear: -25.0, zFar: 25.0))
        renderEncoder.setFragmentBytes(&lightUniform,
                                       length: MemoryLayout<LightUniform>.size,
                                       index: FragmentBufferIndex.lightUniform.rawValue)
        renderEncoder.setFragmentBytes(&camera.position,
                                       length: MemoryLayout<simd_float3>.size,
                                       index: FragmentBufferIndex.cameraPosition.rawValue)
        renderEncoder.setFragmentTexture(shadowMap!, index: 0)
        renderEncoder.setFragmentSamplerState(self.shadowSampler, index: 0)

        render(renderEncoder)
        renderEncoder.endEncoding()
    } // encode
} // ModelPass
```
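`DescriptorManager.buildMTLRenderPassDescriptor` is not shown in the post; with MTKView it could be as simple as reusing the view's current render pass descriptor and overriding the clear color. A sketch under that assumption (written as a free function here):

```swift
import MetalKit

func buildMTLRenderPassDescriptor(view: MTKView,
                                  r: Double, g: Double, b: Double, alpha: Double) -> MTLRenderPassDescriptor {
    // MTKView already wires up the drawable's color texture and, if configured, a depth attachment.
    let descriptor = view.currentRenderPassDescriptor ?? MTLRenderPassDescriptor()
    descriptor.colorAttachments[0].loadAction = .clear
    descriptor.colorAttachments[0].clearColor = MTLClearColor(red: r, green: g, blue: b, alpha: alpha)
    return descriptor
}
```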
- Shadow Pass
  - The shadow map is not currently generated correctly.
```swift
// MARK: - ShadowPass
class ShadowPass {
    // Properties
    var shadowMap: MTLTexture?
    private var vertexDescriptor: MTLVertexDescriptor
    private var renderPipelineState: MTLRenderPipelineState?
    private let lightDir: simd_float3 = simd_float3(0.436436, -0.572872, 0.218218)

    // MARK: - init
    init(device: MTLDevice, mkView: MTKView, vertexFunction: String, fragmentFunction: String) {
        self.vertexDescriptor = DescriptorManager.buildVertexDescriptor(attributeLength: 1)
        self.shadowMap = DescriptorManager.buildMTLTextureDescriptor(device: device)
        self.renderPipelineState = DescriptorManager.buildShadowPipelineDescriptor(device: device,
                                                                                   shadowMap: self.shadowMap,
                                                                                   vertexDescriptor: self.vertexDescriptor,
                                                                                   vertexFunctionName: vertexFunction,
                                                                                   fragmentFunctionName: fragmentFunction)
    } // init

    // MARK: - encode
    func encode(commandBuffer: MTLCommandBuffer,
                mkView: MTKView,
                depthStencilState: MTLDepthStencilState?,
                render: (MTLRenderCommandEncoder) -> Void,
                camera: Camera) {
        let renderPassDescriptor = MTLRenderPassDescriptor()
        renderPassDescriptor.depthAttachment.texture = self.shadowMap
        renderPassDescriptor.depthAttachment.loadAction = .clear
        renderPassDescriptor.depthAttachment.storeAction = .store
        renderPassDescriptor.depthAttachment.clearDepth = 1.0

        let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
        renderEncoder.setRenderPipelineState(self.renderPipelineState!)
        renderEncoder.setDepthStencilState(depthStencilState)

        var lightUniform = LightUniform(viewMatrix: camera.getViewMatrix(eyePosition: lightDir),
                                        projectionMatrix: simd_float4x4.identity().orthographicProjection(l: -10.0, r: 10.0,
                                                                                                          bottom: -10.0, top: 10.0,
                                                                                                          zNear: -25.0, zFar: 25.0))
        renderEncoder.setVertexBytes(&lightUniform,
                                     length: MemoryLayout<LightUniform>.size,
                                     index: VertexBufferIndex.viewUniform.rawValue)

        render(renderEncoder)
        renderEncoder.endEncoding()
    } // encode
} // ShadowPass
```
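For reference, the shadow map texture built by `DescriptorManager.buildMTLTextureDescriptor` would typically be a depth-only render target along these lines (the size and options are assumptions, not the post's actual code):

```swift
import Metal

func buildShadowMapTexture(device: MTLDevice, size: Int = 2048) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                              width: size,
                                                              height: size,
                                                              mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderRead]   // written in the shadow pass, sampled in the model pass
    descriptor.storageMode = .private                 // GPU-only memory
    return device.makeTexture(descriptor: descriptor)
}
```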


















































