The Jittering Issue with Damping in Cinemachine and How to Tackle It
/2023/07/08/18/22/

If you are familiar with Cinemachine, you probably know there is a knotty problem with Cinemachine's damping when you use the Framing Transposer (or some other component) to track a follow point: the camera jitters with damping enabled under an unstable frame rate. The more unstable the frame rate is, the more heavily the camera jitters. This post discusses this phenomenon and proposes workarounds for the issue.
Camera jitters with damping in Cinemachine

Unity's Cinemachine has a notoriously severe problem: the follow object may seem to jitter when you are using the Framing Transposer component with damping enabled.

To show this, I did a simple experiment. I created a new blank scene and spawned a cube attached with the following script (the class wrapper and fields here are reconstructed; only the Update body survived):

    using UnityEngine;

    public class CubeMover : MonoBehaviour // wrapper reconstructed for completeness
    {
        public float speed = 100.0f;
        public int fps = 0; // 0 lets Unity decide the frame rate

        float currentSpeed;
        float elapsedTime;

        void Start()
        {
            currentSpeed = speed;
            if (fps > 0)
                Application.targetFrameRate = fps;
        }

        void Update()
        {
            elapsedTime += Time.deltaTime;
            if (elapsedTime >= 5.0f)
            {
                currentSpeed = currentSpeed > 0.0f ? 0.0f : speed;
                elapsedTime = 0.0f;
            }
            transform.position += new Vector3(1, 0, 0) * currentSpeed * Time.deltaTime;
        }
    }
This script moves the cube for 5 seconds, keeps it steady for another 5 seconds, and then continues moving. The move speed as well as the fps (frames per second) can be set to test under different conditions.

A new virtual camera is then created with a Framing Transposer component following this cube. A default damping of 0.2 is used.

Here is the result with speed set to 100 and fps set to 0 (when set to 0, the real fps is determined by Unity and may be unstable).

The jitters are very clear. You can also notice that the frame rate (presented in the Statistics panel) is very unstable, and we will soon see that it is the unstable fps that causes the camera jitters.
Cinemachine proposes a workaround to alleviate this problem: a revised version of damping that sub-divides each frame and simulates damping over the consecutive series of sub-frames. To enable this functionality, go to Edit -> Project Settings -> Player -> Script Compilation and add the CINEMACHINE_EXPERIMENTAL_DAMPING macro.

Okay, now that we have enabled the new damping algorithm, let's see how it mitigates the jittering issue. Here is the result with the same settings as in the previous experiment, i.e., speed 100 and fps 0.
It is astonishing to see that the jittering issue becomes even more severe. I conjecture that the variance of fps significantly amplifies camera jitters when this feature is enabled. In other words, the experimental damping algorithm responds to the variance of fps in a NON-linear way: when the variance is small, the experimental damping reduces the gaps in camera location between contiguous frames; but when the variance is large, it enlarges the gaps, leading to unacceptable jittering. (Note: I did not validate this conjecture. If you are interested, review the code and test it yourself.)

What about the expected result if fps is stable? Let's run more experiments!

Here is the result with speed at 100 and fps at 120 (a very high fps, usually prohibitive in shipped games).

A very steady camera! What about setting fps to 60? Here is the result.

An fps of 60 performs equally well as 120, which is anticipated since the fps is stable. Okay, let's try a final experiment where fps is set at an extreme value of 20.

Even a low fps of 20 keeps our camera stable, as long as the fps itself is stable.

Now we can conclude that it is the instability of fps that induces camera jitters, regardless of the exact value of fps. But why?
Why the camera jitters

Before answering this question, let us first take a look at the source of the damping implemented in Cinemachine.

    #if CINEMACHINE_EXPERIMENTAL_DAMPING
        // Try to reduce damage caused by frametime variability
        float step = Time.fixedDeltaTime;
        if (deltaTime != step)
            step /= 5;
        int numSteps = Mathf.FloorToInt(deltaTime / step);
        float vel = initial * step / deltaTime;
        float decayConstant = Mathf.Exp(-k * step);
        float r = 0;
        for (int i = 0; i < numSteps; ++i)
            r = (r + vel) * decayConstant;
        float d = deltaTime - (step * numSteps);
        if (d > Epsilon)
            r = Mathf.Lerp(r, (r + vel) * decayConstant, d / step);
        return initial - r;
    #else
        return initial * (1 - Mathf.Exp(-k * deltaTime));
    #endif
    }
Translating the non-experimental branch into mathematics, we have:

$$r' = r\,e^{-k\,\Delta t}, \qquad k = \frac{\ln 100}{T},$$

where $T$ is the damp time parameter and $\Delta t$ the elapsed time in this frame. This equation decays the input $r$, the distance for the camera to go to reach the desired position, by an exponential factor $e^{-k\,\Delta t}$. If $\Delta t = T$, the residual will be $0.01\,r$, meaning that at this frame the camera will traverse 99% of the desired distance, leaving only 1% for future frames.
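The 1%-after-one-damp-time convention is easy to verify numerically. Below is a minimal Python sketch of the decay rule (the function name is mine; Cinemachine's actual implementation lives in Predictor.cs):

```python
import math

def damp_residual(r, dt, damp_time):
    """Exponentially decay a residual: the fraction left after dt seconds.

    k is chosen so that only 1% of the residual survives one full damp time.
    """
    k = math.log(100.0) / damp_time
    return r * math.exp(-k * dt)

# After exactly one damp time, 1% of the distance remains.
print(damp_residual(1.0, 0.2, 0.2))  # -> 0.01 (up to floating point)
```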
OK, let's assume we've placed a cube at the origin, moving along the x-axis at a fixed speed, say, $v$ m/s. A camera is placed to track the cube with damping, with damp time $T$. Let's further denote the delta time of the $i$-th frame by $\delta_i$.

Having all variables fully prepared, we can simulate the object movement and the camera tracking process.

At the beginning of the 0-th frame, the camera and the cube are both at the origin, i.e., (0, 0, 0). As the cube only moves along the x-axis, we can omit the y and z dimensions and use a one-dimensional coordinate to represent the cube and camera positions.

At the 1st frame, the cube moves to $x_1 = v\delta_1$, the distance the camera traverses is $d_1 = (r_0 + v\delta_1)(1 - e^{-k\delta_1})$, and the residual is $r_1 = (r_0 + v\delta_1)\,e^{-k\delta_1}$. We set $r_0 = 0$ for simplicity.

At the 2nd frame, the cube moves to $x_2 = v(\delta_1 + \delta_2)$, the distance the camera traverses is $d_2 = (r_1 + v\delta_2)(1 - e^{-k\delta_2})$, and the residual is $r_2 = (r_1 + v\delta_2)\,e^{-k\delta_2}$.

At the $i$-th frame, we have $x_i = v\sum_{j \le i}\delta_j$, $d_i = (r_{i-1} + v\delta_i)(1 - e^{-k\delta_i})$, and $r_i = (r_{i-1} + v\delta_i)\,e^{-k\delta_i}$.

Without loss of generality, we can set $v = 1$. The following sections use this setting unless otherwise stated.

For different combinations of $\delta_i$, $r_i$ may behave differently. Let's dive in and see how $\delta_i$ influences the results.
Case 1: Stable FPS, all $\delta_i$ are equal

When all $\delta_i$ are equal, say $\delta_i = \delta$, our equations reduce to:

$$r_i = (r_{i-1} + v\delta)\,e^{-k\delta}.$$

$r_i$ apparently has a limit of

$$r_\infty = \frac{v\delta\,e^{-k\delta}}{1 - e^{-k\delta}} = \frac{v\delta}{e^{k\delta} - 1}$$

when $i \to \infty$, since $e^{-k\delta} < 1$. This explains why a camera with damping always has a maximum distance to its follow target. That maximum distance, also the supremum, is exactly $\frac{v\delta}{e^{k\delta} - 1}$. When $v$ is larger, $r_\infty$ is larger, implying the maximum distance between the camera and its follow target will be larger.

What if $\delta_i$ is mutable? In this case, we can assume there exists an upper bound $\delta_{\max}$ such that all $\delta_i$ satisfy $\delta_i \le \delta_{\max}$. Then we are able to derive the same conclusion.

Another question is: why does the camera not jitter when FPS is stable? We turn to examine the sign of $r_i - r_{i-1}$:

$$r_i - r_{i-1} = v\delta\,e^{-k\delta} - r_{i-1}\left(1 - e^{-k\delta}\right) > 0,$$

which holds because the residual grows from $r_0 = 0$ toward its supremum, so $r_{i-1} < \frac{v\delta\,e^{-k\delta}}{1 - e^{-k\delta}}$.

Therefore, when FPS is stable, $r_i$ is always larger than $r_{i-1}$, and jitter will never happen.
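The Case 1 behavior can be checked with a small simulation (Python; the scaffolding and the 60 fps figure are mine, with $v = 1$ and the default damp time 0.2): the residual grows monotonically and converges to $v\delta/(e^{k\delta} - 1)$.

```python
import math

def simulate_residuals(deltas, v=1.0, damp_time=0.2):
    """Iterate the update r_i = (r_{i-1} + v * delta_i) * exp(-k * delta_i)."""
    k = math.log(100.0) / damp_time
    r, history = 0.0, []
    for d in deltas:
        r = (r + v * d) * math.exp(-k * d)
        history.append(r)
    return history

delta = 1.0 / 60.0                       # stable 60 fps
rs = simulate_residuals([delta] * 500)
k = math.log(100.0) / 0.2
supremum = delta / (math.exp(k * delta) - 1.0)
print(rs[-1], supremum)                  # the two values agree closely
```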
Case 2: Unstable FPS, $\delta_i$ vary

When FPS is unstable and $\delta_i$ may mutate, how will the camera move in response to its follow target? We can still examine the sign of $r_i - r_{i-1}$, but in another way:

$$r_i - r_{i-1} = e^{-k\delta_i}\,(v\delta_i) + \left(1 - e^{-k\delta_i}\right)(-r_{i-1}).$$

This equation uncovers why camera jitters happen with unstable FPS. The residual difference at the $i$-th frame is essentially an interpolation between the follow target's current position increment $v\delta_i$ and the last frame's negative residual $-r_{i-1}$, where the interpolation weight is the decay factor $e^{-k\delta_i}$. As both $v$ and $k$ are fixed, a change in $\delta_i$ inclines the resulting difference toward different ends, either $v\delta_i$ or $-r_{i-1}$.

In our simplified case, in which the target moves at a fixed speed along the x-axis, $v\delta_i$ is always positive (though its magnitude can vary) and $-r_{i-1}$ is always negative. A mutating $\delta_i$ thus has a chance to alter the sign of $r_i - r_{i-1}$, which further brings about camera jitters.

So when will the camera jitter? From the above equation, we know that the camera jitters when the sign of $r_i - r_{i-1}$ consistently changes over time, i.e., the value of $r_i - r_{i-1}$ oscillates around zero. Let's make it equal to zero and see what we can find:

$$r_i - r_{i-1} = 0 \iff r_{i-1} = \frac{v\delta_i\,e^{-k\delta_i}}{1 - e^{-k\delta_i}} = \frac{v\delta_i}{e^{k\delta_i} - 1}.$$

This tells us that when $r_{i-1}$ is near $\frac{v\delta_i}{e^{k\delta_i} - 1}$, the camera has a large chance to jitter. This motivates us to improve damping by filtering out the occasions where $r_{i-1}$ is very close to $\frac{v\delta_i}{e^{k\delta_i} - 1}$.
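The sign-flip mechanism is easy to reproduce. In the sketch below (my own toy numbers), a perfectly stable frame time produces no sign changes in $r_i - r_{i-1}$, while a frame time alternating between 1/30 s and 1/120 s flips the sign almost every frame once the residual settles:

```python
import math

def residual_diffs(deltas, v=1.0, damp_time=0.2):
    """Per-frame residual differences r_i - r_{i-1} under exponential damping."""
    k = math.log(100.0) / damp_time
    r, diffs = 0.0, []
    for d in deltas:
        nr = (r + v * d) * math.exp(-k * d)
        diffs.append(nr - r)
        r = nr
    return diffs

def sign_flips(xs):
    return sum(a * b < 0 for a, b in zip(xs, xs[1:]))

stable = residual_diffs([1.0 / 60.0] * 80)
jittery = residual_diffs([1.0 / 30.0, 1.0 / 120.0] * 100)  # unstable fps
print(sign_flips(stable), sign_flips(jittery))
```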
What about going deeper? We can treat $\delta_i$ as a variable $\delta$ and all the others as constants. This abstraction gives us a function of $\delta$ (the negative of the residual difference above):

$$g(\delta) = r_{i-1}\left(1 - e^{-k\delta}\right) - v\delta\,e^{-k\delta}.$$

Taking the derivative of $g$, we know that $g$ is monotonically decreasing when $\delta < \frac{1}{k} - \frac{r_{i-1}}{v}$ and monotonically increasing when $\delta > \frac{1}{k} - \frac{r_{i-1}}{v}$, and $g(0) = 0$. Hence, to make the sign of $g$ mutable, $\frac{1}{k} - \frac{r_{i-1}}{v}$ must be positive and the minimum of $g$ must be negative.

The minimum of $g$ can be easily computed:

$$g\!\left(\frac{1}{k} - \frac{r_{i-1}}{v}\right) = r_{i-1} - \frac{v}{k}\,e^{\frac{k\,r_{i-1}}{v} - 1} < 0.$$

The last inequality holds because $e^{x-1} \ge x$ for all $x$, with equality only at $x = 1$.

This reveals the following fact: when $r_{i-1} < \frac{v}{k}$, a varying $\delta_i$ is likely to cause $r_i - r_{i-1}$ to change its sign, thus resulting in camera jitters. Suppose $\delta_i$ is large enough that $g(\delta_i) > 0$; then the $i$-th residual $r_i$ gets smaller than $r_{i-1}$. A smaller $r_i$ pushes the root of $g$ to become larger for the next frame. In this case, even with the same delta time, $\delta_{i+1}$ has a larger chance to fall below the root, where $g$ is negative and the residual grows again. The sign of the residual difference thus keeps alternating, which is exactly the jitter we observe.
Solutions

Solution 1: imposing an invalid range
Based on what we've discussed so far, we can immediately come up with a simple solution: enforce $r_i$ to be $r_{i-1}$ when the two would be very close anyway. Concretely, let $\delta'$ be the damping duration that would keep the residual unchanged, i.e., $e^{-k\delta'} = \frac{r_{i-1}}{r_{i-1} + v\delta_i}$. We pick a small tolerance $\epsilon$; if $|\delta_i - \delta'| < \epsilon$, we just set $\delta_i$ to $\delta'$.

Note that $\delta'$ can be zero or negative. If this is the case, we keep the original $\delta_i$ without doing anything. Besides, you should be aware that $\delta'$ here is not the time this frame actually takes; it is just the duration used to calculate damping.
Let us explain it more quantitatively. Suppose $\delta_i = \delta' + \Delta$, where $|\Delta| < \epsilon$. Then $\delta_i \leftarrow \delta'$ according to our algorithm. We then plug $\delta'$ into the original expression of $r_i$:

$$r_i = (r_{i-1} + v\delta_i)\,e^{-k\delta'} = (r_{i-1} + v\delta_i)\,\frac{r_{i-1}}{r_{i-1} + v\delta_i} = r_{i-1}.$$

The camera may lag behind its follow target slightly more than it would with the raw $\delta_i$, since the residual is larger. But after substituting $\delta_i$ with $\delta'$, $r_i - r_{i-1}$ is exactly zero, meaning that the camera now keeps the same residual as in the last frame. The camera does not jitter.
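A Python rendition of the whole Solution 1 loop (my own scaffolding and parameter values, mirroring the tolerance idea rather than Cinemachine's exact code) shows the clamp at work: after a short warm-up, the residual difference stops changing sign even under an alternating frame time.

```python
import math

def improved_damp(r_prev, v_dt, dt, damp_time, tolerance=0.05):
    """Damp the residual, but snap the damping duration to the value that
    keeps the residual unchanged whenever the two durations are close."""
    k = math.log(100.0) / damp_time
    b = r_prev / (r_prev + v_dt + 1e-7)      # ratio at which r_i == r_{i-1}
    if 0.0 < b < 1.0:
        dt_hold = -math.log(b) / k           # duration that holds the residual
        if dt_hold > 0.0 and abs(dt - dt_hold) < tolerance:
            dt = dt_hold                     # camera keeps its distance
    return (r_prev + v_dt) * math.exp(-k * dt)

r, diffs = 0.0, []
for d in [1.0 / 30.0, 1.0 / 120.0] * 100:    # frame time alternating 1/30 and 1/120 s
    nr = improved_damp(r, 1.0 * d, d, 0.2)
    diffs.append(nr - r)
    r = nr
flips = sum(a * b < 0 for a, b in zip(diffs, diffs[1:]))
print(flips)  # at most the single flip where the clamp first engages
```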
Here comes the question: what if the follow target slows down, stops, or even turns back in the opposite direction, while the camera still keeps the same residual to it?

It is quite a good question. But if we look carefully at the function $g$, we will find that this situation never happens. Let's rewrite $g$ here:

$$g(\delta) = r_{i-1}\left(1 - e^{-k\delta}\right) - v\delta\,e^{-k\delta}.$$

This time, we do not constrain the value of $v$; we only know it was positive at the last frame.

When $v$ gets smaller but stays positive, we observe that the function gradually shifts leftwards, pushing the root towards zero and away from the actual delta time. The interval in which $|\delta_i - \delta'| < \epsilon$ holds gets contracted, and the probability of keeping the same residual gets smaller.

When $v$ is zero, i.e., the follow target stops, the ratio $\frac{r_{i-1}}{r_{i-1} + v\delta_i}$ degenerates to 1, giving $\delta' = 0$; the clamp is skipped and the current residual is readily calculated as $r_i = r_{i-1}\,e^{-k\delta_i}$, which closes the distance gap between the camera and the follow target.

When $v$ is negative, the denominator $r_{i-1} + v\delta_i$ shrinks; the ratio can blow up to infinity (division by zero) or become negative, in either case falling outside the valid range $(0, 1)$, so the original damping applies again.
We can implement this algorithm in less than 100 lines of code. You should modify three files in the official Cinemachine source code directory.

First is Predictor.cs. Add an ImprovedDamp function whose input bonus is the ratio $\frac{r_{i-1}}{r_{i-1} + v\delta_i}$ and whose parameter tolerance is the $\epsilon$ we introduced above.
In file CinemachineVirtualCameraBase.cs, add a new function ImprovedDetachedFollowTargetDamp:

    public Vector3 ImprovedDetachedFollowTargetDamp(Vector3 initial, Vector3 dampTime, float deltaTime)
    {
        GameObject go = GameObject.Find("Cube"); // Hard-find our follow target of interest; you should not do this!
        Vector3 deltaDistance = new Vector3(100, 0, 0) * deltaTime; // Hard-set the velocity; you should not do this!
        Vector3 residual = initial - deltaDistance;
        Vector3 bonus = new Vector3(
            residual.x / (residual.x + deltaDistance.x + 1e-7f),
            residual.y / (residual.y + deltaDistance.y + 1e-7f),
            residual.z / (residual.z + deltaDistance.z + 1e-7f));

This piece of code is very informal, and you should never write your code like this. The purpose of this function is to obtain $v\delta_i$ and the bonus ratio. I reckon the correct way to do this is to create one (or two) new variables in the CinemachineVirtualCameraBase class and update them in each tick. The code presented here is only for demonstration.
In file CinemachineFramingTransposer.cs, change the called function for damping:

    cameraOffset = VirtualCamera.ImprovedDetachedFollowTargetDamp( // Original is DetachedFollowTargetDamp
        cameraOffset, new Vector3(m_XDamping, m_YDamping, m_ZDamping), deltaTime);

You could also try other components, not just FramingTransposer.
With the default tolerance=0.05, the result is shown below.

Camera jitters disappear. Note that the general fps is quite high (around 400-500) because our scene is quite simple, containing only a cube and a camera. To simulate a more realistic runtime game situation, I placed 20k cubes in the scene; now the fps is around 30, but still unstable.

Below is the result when using the raw damping algorithm.

The camera jitters more severely due to a generally lower FPS. What about using the improved damping algorithm? Here is the result with tolerance=0.05.

Just as expected, camera jitters do not show up. Let's try different tolerances. How will a small tolerance help alleviate jitters? Below is the result with tolerance=0.01.

Camera jitters occur again! This suggests that an excessively small value cannot fully filter out the occasions that lead to camera jitters. Let's try our final experiment with tolerance=0.1.

Camera jitters disappear, but the camera motion seems a little stiff. These experiments show that an appropriate value of tolerance is needed to ensure both the smoothness and the robustness of the camera.
Solution 2: adding a low-pass filter

Our improved damping perfectly solves camera jitters under unstable fps, but it looks very stiff when it reaches the boundary of the maximum damping distance. Can we make it more realistic so that the motion won't look stolid? Yes, of course: we can add a low-pass filter, or moving average, to our improved damping to achieve smoother results.

Recall the algorithm of the improved damping: if $|\delta_i - \delta'| < \epsilon$, we set $\delta_i$ to $\delta'$, which zeroes the residual change $\Delta r_i = r_i - r_{i-1}$. Instead of hard-setting $\Delta r_i$ to zero, we introduce $\widetilde{\Delta r}_i$, a smoothed version of the original delta residual $\Delta r_i$. If $|\delta_i - \delta'| < \epsilon$ holds, we calculate $\widetilde{\Delta r}_i$ as an average of $\widetilde{\Delta r}_{i-1}$ and $\Delta r_i$, which can be iterated through a recursive form:

$$\widetilde{\Delta r}_i = \alpha\,\Delta r_i + (1 - \alpha)\,\widetilde{\Delta r}_{i-1},$$

i.e., an exponential moving average with smoothing factor $\alpha$.

Note that $\widetilde{\Delta r}_i$ gets updated if and only if $|\delta_i - \delta'| < \epsilon$ holds, i.e., when the camera lies in the unstable area. The use of $\widetilde{\Delta r}_i$ is similar to low-pass filters in the sense that they all filter out high-frequency signals.
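A Python sketch of the filtered update (the EMA factor $\alpha$ and the surrounding plumbing are mine, mirroring the description above rather than the Predictor.cs listing):

```python
import math

def lowpass_improved_damp(r_prev, smoothed_dr, v_dt, dt, damp_time,
                          tolerance=0.05, alpha=0.5):
    """Improved damping where, inside the unstable region, the raw residual
    change is folded into an exponential moving average instead of zeroed."""
    k = math.log(100.0) / damp_time
    r_raw = (r_prev + v_dt) * math.exp(-k * dt)   # plain damping result
    b = r_prev / (r_prev + v_dt + 1e-7)
    unstable = False
    if 0.0 < b < 1.0:
        dt_hold = -math.log(b) / k
        unstable = dt_hold > 0.0 and abs(dt - dt_hold) < tolerance
    if unstable:
        smoothed_dr = alpha * (r_raw - r_prev) + (1.0 - alpha) * smoothed_dr
        return r_prev + smoothed_dr, smoothed_dr
    return r_raw, smoothed_dr                     # filter state unchanged
```

Outside the unstable region this is exactly the plain damping step; inside it, the applied change is the low-passed history of raw changes.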
Below is a sample implementation in file Predictor.cs.

Let's try it out! With the damp time at its default of 0.2 and a moderate smoothing factor, we achieve the following damping result with the low-pass filter:

Now the camera looks much more smooth and flexible. What about trying a smaller damp time? Here is the result:

The result is okay, but sometimes it still jitters. This is because a smaller $T$ leads to a larger $k$ and thus a larger chance of jitter. To solve this issue, we can set a larger tolerance $\epsilon$, or a smaller smoothing factor $\alpha$. We adopt a smaller $\alpha$ and see how it performs.

The camera now becomes smooth again.
We can measure this sort of instability more quantitatively. Below is a graph plotting $r_i - r_{i-1}$ during five seconds of camera tracking with damp time 0.2. The original curve (in blue) oscillates over time due to the instability of fps. The improved damping method eliminates all the oscillation and makes the curve absolutely flat. Empowered by the low-pass filter, the curve becomes smooth without loss of stability.

Below is the graph with a smaller damp time. As can be seen, even with improved damping the camera still has a chance to vibrate, and the original curve oscillates much more intensely than with damp time 0.2. Employing the low-pass filter gives a much smoother and more stable camera motion curve, as expected.
Speaking of this, why can't we just soften the hard assignment in our improved damping? Instead of snapping $\delta_i$ to $\delta'$, we can blend them: $\delta_i \leftarrow w\,\delta' + (1 - w)\,\delta_i$, where $w$ is a function of $|\delta_i - \delta'|$ parameterized by $\beta$. The weight $w$ equals 1 when $\delta_i = \delta'$ and falls to 0 as $|\delta_i - \delta'|$ approaches the tolerance $\epsilon$; $\beta$ controls how fast $w$ decays in between. The larger $\beta$ is, the more mass is concentrated on the $\delta'$ side, i.e., the closer the behavior is to the hard clamp.
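A sketch of this soft variant in Python (the weight function $w = 1 - (|\delta_i - \delta'|/\epsilon)^\beta$ is my guess at a reasonable parameterization; the article's exact function is not shown):

```python
import math

def soft_improved_damp(r_prev, v_dt, dt, damp_time, tolerance=0.05, beta=2.0):
    """Soft clamp: blend dt toward the residual-holding duration dt_hold.

    w = 1 at dt == dt_hold and w = 0 once |dt - dt_hold| reaches the
    tolerance; a larger beta keeps w near 1 longer (stiffer camera).
    """
    k = math.log(100.0) / damp_time
    b = r_prev / (r_prev + v_dt + 1e-7)
    if 0.0 < b < 1.0:
        dt_hold = -math.log(b) / k
        if dt_hold > 0.0:
            w = 1.0 - min(1.0, abs(dt - dt_hold) / tolerance) ** beta
            dt = w * dt_hold + (1.0 - w) * dt
    return (r_prev + v_dt) * math.exp(-k * dt)
```

At $\delta_i = \delta'$ this reduces to the hard clamp (the residual is held), and beyond the tolerance it reduces to plain damping.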
Below is the result with the soft parameterization at a moderate $\beta$:

Not bad! The soft version of improved damping really makes the camera smoother and less stiff than the vanilla improved damping algorithm. The following plot also shows that with the soft parameterization, the camera trajectory is much more natural, with a negligible amount of oscillation.

We also compare different values of $\beta$. Below is the result with a larger $\beta$.

A larger $\beta$ makes the camera stiffer, but it is still better than the original improved damping algorithm.

Below is the result with a smaller $\beta$.

A small $\beta$ is less effective, as the magnitude of attenuation it applies to $\delta_i$ is not enough to compensate for the oscillation the unstable fps brings about.

Let's try another damp time. The result with a smaller damp time and a moderate $\beta$ follows.

With a smaller damp time, the oscillation is more severe, as we've already stated above. What about increasing $\beta$?

Better, but still not sufficient to mitigate the oscillation. Let's increase $\beta$ further.

Almost perfect. We can conclude that a smaller damp time $T$ needs a larger $\beta$ to offset the intense jitters resulting from unstable fps. Besides, you can combine the soft improved damping method with the low-pass filter to achieve an even smoother transition.
Solution 3: continuous residual

Okay, let's forget all the aforementioned solutions and revisit our residual update formula from the very beginning:

$$r_i = (r_{i-1} + v\delta_i)\,e^{-k\delta_i}.$$

Reformulate this equation into the following form:

$$r(t + \delta) = \big(r(t) + v\delta\big)\,g(\delta),$$

where $v$ is the speed of the camera's follow target, and $g$ is a more generalized form of the damping function $e^{-k\delta}$. Theoretically, it can represent any damping function of interest.
Now, regarding $r$ as a function with respect to $t$, we can seek to obtain its derivative:

$$r'(t) = \lim_{\delta \to 0}\frac{\big(r(t) + v\delta\big)g(\delta) - r(t)}{\delta} = v\,g(0) + r(t)\,g'(0) = v + r(t)\,g'(0).$$

We use the equality $g(0) = 1$ because when you plug $\delta = 0$ into the update rule, you must get $r(t + 0) = r(t)$, implying $g(0) = 1$.

What about derivatives of higher orders? Since $v$ is constant, we can calculate the second-order derivative as follows:

$$r''(t) = g'(0)\,r'(t).$$

It is a nice form which bridges the first-order derivative $r'$ and the second-order derivative $r''$. In fact, any $n$-th order derivative can be recursively calculated as:

$$r^{(n)}(t) = g'(0)\,r^{(n-1)}(t), \qquad n \ge 2.$$

Having all these derivatives, we can then expand $r(t + \delta)$ using a Taylor series and calculate the difference to $r(t)$:

$$r(t + \delta) - r(t) = \sum_{n=1}^{\infty} \frac{r^{(n)}(t)}{n!}\,\delta^n = r'(t)\sum_{n=1}^{\infty} \frac{g'(0)^{\,n-1}}{n!}\,\delta^n.$$

Note that if we still choose $e^{-k\delta}$ as our damping function $g$, its derivative with respect to $\delta$ will be $-k\,e^{-k\delta}$ and the value at zero will be $g'(0) = -k$.
In practice, we first decide how many terms of the coefficient series to take in, sum them up, and multiply by the velocity term $r'(t)$; denote the result by $\widehat{\Delta r}$. The residual at the current frame can then be readily computed as $r_i = r_{i-1} + \widehat{\Delta r}$. To save computation, we can cache the coefficients $\frac{(-k)^{n-1}}{n!}$ up to a threshold order, say $N$, and reuse them every frame to efficiently compute the coefficient sum.
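The truncated-series step can be sketched as follows (Python; assumes $g(\delta) = e^{-k\delta}$ so that $g'(0) = -k$, with the coefficients accumulated on the fly rather than cached):

```python
import math

def series_residual_step(r_prev, v, dt, damp_time, order):
    """Advance the residual one frame using a truncated Taylor expansion
    of the continuous model r'(t) = v - k * r(t)."""
    k = math.log(100.0) / damp_time
    r1 = v - k * r_prev              # first-order derivative at frame start
    coeff_sum = 0.0
    term = dt                        # n-th term: (-k)^(n-1) * dt^n / n!
    for n in range(1, order + 1):
        coeff_sum += term
        term *= -k * dt / (n + 1)
    return r_prev + r1 * coeff_sum

# As the order grows, the step converges to the exact continuous update
# r + (v - k r) * (1 - exp(-k dt)) / k:
k = math.log(100.0) / 0.2
exact = 0.02 + (1.0 - k * 0.02) * (1.0 - math.exp(-k / 60.0)) / k
print(abs(series_residual_step(0.02, 1.0, 1.0 / 60.0, 0.2, 8) - exact))
```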
To estimate the truncation error, we use the Lagrange remainder:

$$R_N = \frac{r^{(N+1)}(\xi)}{(N+1)!}\,\delta^{N+1}, \qquad \xi \in (t, t + \delta).$$

Decompose it:

$$|R_N| = \frac{|g'(0)|^{N}\,|r'(\xi)|}{(N+1)!}\,\delta^{N+1} = \frac{(k\delta)^{N}}{(N+1)!}\,\delta\,\big|v - k\,r(\xi)\big|,$$

where $g'(0) = -k$ and $r'(\xi) = v - k\,r(\xi)$ as derived above. We can see that the error is asymptotically negligible with respect to $N$, especially when $k\delta$ is small.

Recall that $k = \frac{\ln 100}{T}$, where $T$ is the damp time. If $T$ is large, say 0.5 or even 1.0, the value of $k$ will be somewhat small, so a decent precision can be reached within a few steps of expansion; i.e., a small $N$, say 2 or 3, could satisfy camera stability. However, if $T$ is small, say 0.2 or 0.1 or even smaller, the value of $k$ grows larger, and a larger $N$ might be needed to reach our expected precision. This is in accordance with our observation that a smaller $T$ generally leads to a more unstable camera trajectory. We will show this soon.
Let's first try the default damp time with $N = 1$. Recall that $N$ is the maximum order of derivatives we use to approximate the residual difference; $N = 1$ means we only use $r'(t)$ in the coefficient term. Here is the result:

Looks nice! What about setting $N = 2$?

Not much difference, but a little bit smoother. Let's try another damp time, again with the two orders respectively. First comes the lower one.

It's okay, but the camera seems too fast when the cube comes back to stillness. How about the higher order?

Now everything works! Next, let's set the damp time $T$ smaller, which generally won't be used in actual gameplay, but as a test it's worth a try. We fix this small $T$ and try different $N$ to see how they influence our camera trajectory.

Here is the result with the smallest $N$:

Okay... a total mess. Increase $N$ by one:

Unfortunately, the cube always stays behind the camera. One more order:

Forget about it... Let's keep increasing $N$:

Things are getting better! At least the camera does not shake anymore and begins to stay at the right position. I bet the next order is better:

It's close! Last, we try an even larger $N$:

Finally, the camera handles everything well. As we can see from this process, a small $T$ requires a large $N$ to reach the minimum acceptable precision. I hope you never have to use such a small $T$, and if it happens, cache enough orders of derivatives, or it would be prohibitively expensive to compute at runtime.
To further understand why this method solves the jittering issue, we take a deeper look at the expression of $r'(t)$ derived above: $r'(t) = v - k\,r(t)$. This is an ODE, and we can solve it out (proof left to the readers):

$$r(t) = \frac{v}{k}\left(1 - e^{-kt}\right),$$

with $r(0) = 0$. Here $k$ can be expanded as $\frac{\ln 100}{T}$. We cannot directly use this explicit expression to calculate $r$ because there is no correct absolute time stamp when the game is running. All we have is the previous frame's residual $r_{i-1}$ and the elapsed time of this frame, $\delta_i$. And as the velocity may change over time, a closed form of $r(t)$ cannot serve our purpose well. We can only incrementally calculate camera residuals at each frame based on what we currently have.

$r(t)$ is a monotonically increasing function, and of course, it is continuous. The continuity ensures that the camera trajectory is always smooth and never jitters, provided the fps is sufficiently high (over one thousand, I suppose?).

For the original discrete residual, its velocity is:

$$\tilde r'(\tau) = \frac{r_i - r_{i-1}}{\delta_i} = \frac{v\delta_i\,e^{-k\delta_i} - r_{i-1}\left(1 - e^{-k\delta_i}\right)}{\delta_i},$$

where $\tau$ is from Lagrange's Mean Value Theorem. Note that I add a tilde over $\tilde r$ to distinguish it from the continuous version above.

This is another ODE. We can solve it out (proof left to the readers):

$$\tilde r(\tau) = \frac{A}{B} + \left(C - \frac{A}{B}\right)e^{-B\tau}, \qquad A = v\,e^{-k\delta_i}, \quad B = \frac{1 - e^{-k\delta_i}}{\delta_i}.$$

Note that we solve the ODE with respect to $\tau$, the increment time, rather than the absolute time $t$. So we introduce an initial value $C$ to control the residual at the start of this frame; $\tau$ is now the elapsed time within this frame, satisfying $0 \le \tau \le \delta_i$ and $\tilde r(0) = C = r_{i-1}$.

The following graph shows how the function changes with different $r_{i-1}$ and $\delta_i$. It can be noticed that this function is very sensitive to the input $\tau$, the elapsed time at this frame: a small change of the input can flip the sign of $\tilde r(\tau) - r_{i-1}$, thus causing camera jitters. We also notice that a smaller equilibrium $\frac{A}{B}$, derived from a smaller damp time, pushes the function leftwards, which also makes it more vulnerable to its inputs.
Below is a comparison between the five damping algorithms introduced in this article, including the original damping. The damp time is set to 0.2. We observe a significant stability improvement when using any of the four proposed damping algorithms. You should be careful when choosing the most appropriate algorithm for the situation in which you intend to use damping: How unstable is your fps? What is the damp time $T$? How does the tracked object move? Experiment with these algorithms and choose the one that best suits your needs.
Half-body character shot: a {medium shot | over the shoulder shot | head and shoulder shot} portrait of [content], [art style], [lighting], [other keywords], [MJ parameters]

Full-body character shot: a full body portrait of [content], [art style], [lighting], [other keywords], [MJ parameters]

The composition part specifies the framing, such as close-up, medium shot, or long shot. The basic compositions are:
- Close-up: closeup, portrait
- Full body: full body, full body portrait
- Landscape: wide angle, epic composition, low angle, high angle

A prompt usually starts directly with a [composition] of ..., where [composition] is the framing you choose. For a close-up you can say a closeup shot of ... or a headshot portrait of ...; for a full-body shot, say a full body portrait of ....

The images below are examples of close-up / medium shot / long shot. The prompt is a [composition] of an old asian lady --ar 3:4 --q 1.5, where [composition] is replaced by closeup shot, medium shot and full body portrait respectively, with the aspect ratio set to 3:4, 2:3 and 9:16. The last image is a landscape whose prompt is vast grassland, wide angle, epic composition --q 1.5 --ar 32:9:

You can see the differences between the compositions. As for why the aspect ratio changes, see the parameter list below.

Content

The content part specifies what appears in the picture. It can be as detailed or brief as needed, but it generally consists of several short phrases. For example, I want to design a character based on a phoenix: a full-body shot, with red and yellow blossoms, wearing gorgeous colorful accessories. The prompt is therefore a full body portrait of a phonix goddess, red and yellow blossoms, wearing rainbow opal accessories, exquisite decorations --ar 9:16

The first image was generated with the extra parameter --test, hence its richer details.

The same applies to non-character subjects. For example, to design the city of Atlantis, standing on a cliff with luxurious buildings, I can use the city of Atlantis on steep cliff, enormous beautiful palace, exquisit architecture --aspect 9:32 --q 1.5 to get the images below:

Lighting is also an important part of an image, and we can specify the type of lighting directly. Taking vast grassland with a lake in the center, a giant tree growing by the lake, --ar 16:9 --q 1.5 as the base prompt, we try the following lighting keywords in turn: moody lighting, morning lighting, cinematic lighting, soft lighting, volumetric lighting, rembrandt lighting, godrays and chiaroscuro:

Besides landscapes, different lighting can also be applied to characters. Using a full body portrait of a phoenix goddess, red and yellow blossoms, wearing rainbow opal accessories, exquisite decorations --ar 9:16 --q 1.5 as the base prompt, the same lighting settings are added in turn:

Next, with a medium shot portrait of a beautiful women in dark green kimono, beautiful face, smile, blue eyes, long black hair, painted by Anne Stokes, rembrandt lighting, [color], ultra detailed, plume --ar 2:3 --s 5000 as the base prompt, we use vibrant color, prismatic, black and white, monochrome, colorful, rainbow as the color keywords:

Finally, I set the prompt to a full body portrait of a wicked goddness, beautiful white dress, evil smile, red eyes, black wings, shining gold flowers on her hair, concept art, photo realistic, painted by Dorothea Tanning, back lighting, dramatic lighting, greyscale, intricate details, bold brushstrokes, mystical --ar 2:3, which produced the following images (the last two used --testp):

Therefore, we chose the prompt a beautiful magnificent steampuck building by the seaside, view from the sea, rigorous architecture, ultra realistic, epic composition, wide angle, close up, morning lighting, volumetric lighting, warm colors, intricate details, 8K, hd, unreal engine, enchanting --ar 9:32 --test --creative, which generated the images below:

Adding some artists gives the following images (images 1 and 2: painted by Earl Norem, Edwin Lord Weeks; image 3: painted by Elizabeth Shippen Green; image 4: Ford Madox Brown; image 5: Farel Dalrymple; image 6: François Schuiten; image 7: Franz Marc; image 8: Georges Rouault):

For example, below I used several different composition keywords to generate a beautiful girl (the keywords are portrait; medium shot; full body shot; full body shot, dutch angle; portrait, dutch angle, depth of field; portrait, side view; full body shot, back view; full body shot, from above):

Besides explicitly specifying an art style, you can also specify artists or works to push the picture toward a particular style. Unlike MJ, however, NovelAI does not rely heavily on artists: generally you add none, or at most one artist or work. For example, below I specified ghibili, Hayao Miyazaki, breath of the wild and dark soul respectively:
A Brief Note on Version Control and Project Organization
/2022/11/21/16/03/

This is a brief note on the e-book Version Control and Project Organization Best Practice Guide presented at Unite 2022. From this book, we can learn the fundamental concepts of version control and some best practices for organizing a Unity project. If you are new to Unity, or you are preparing to set up a larger-scale Unity project, this may be what you need.
Foundational concepts

Version control enables you to keep a historical track of your entire project. It facilitates collaboration with your team through trackable and revertible commits organized in the form of a timeline.

Why use version control:

- Useful for making experimental changes
- Easy iteration
- Avoids conflicts

Centralized vs. distributed version control:

- Centralized: the repository resides on a dedicated server, and changes are fetched from and sent to the repository directly. To avoid conflicts, users can lock files for modification, which is known as checking out the file.
- Distributed: users have a local copy of the project and submit changes whenever they want, without always working on the latest files as on a centralized system. However, it costs a lot of space to store the entire change history.

Typical workflow:

- Centralized:
  1. Update your working copy with changes from the server
  2. Make your changes
  3. Commit your changes to the central server
- Distributed:
  1. Pull any remote changes into your local repo
  2. Make changes
  3. Commit changes
  4. Push changes back to the remote repo
Best practices for organizing a Unity project

Folder structure recommendations:

- Document your naming conventions and folder structure.
- Be consistent with your naming convention.
- Don't use spaces in file and folder names.
- Separate testing or sandbox areas.
- Avoid extra folders at the root level.
- Keep your internal assets separate from third-party ones.

The .meta file: it holds information about the file it is associated with, e.g., textures, meshes, and audio clips that have particular import settings.

Naming standards:

- Use descriptive names, not abbreviations.
- Use camel case/Pascal case.
- Use underscores sparingly.
- Use number suffixes to denote a sequence.
- Follow your naming document.

Workflow optimization:

- Split up your assets: break levels into smaller scenes, using SceneManager.LoadSceneAsync; break work up into Prefabs where possible.
- Use Presets to save asset settings.
Code standards:

- Decide on a code standard and stick with it.
- When using namespaces, break your folder structure up by namespace for better organization.
- Use a standard header.
- Use script templates by creating an Assets/ScriptTemplates folder. You can also use your own keywords and replace them with an Editor script implementing the OnWillCreateAsset method.

Version control tools:

- Git: Fork, GitKraken, VS Code, VS, SourceTree, Sublime Merge.
- Perforce (Helix Core): see here to learn how to integrate Perforce into Unity.
- Apache Subversion (SVN)
- Plastic SCM: see here to learn more about Plastic SCM.
Setting up Unity to work with version control

Editor project settings:

- Perforce: Edit -> Project Settings -> Version Control -> Mode.
- Plastic SCM: click the Plastic SCM icon in the toolbar at the top right of the Unity Editor.

What to ignore: do not commit the Library folder, nor the .exe or .apk files.

Working with large files: teams prefer a centralized workflow, where large binary files live only on a central server and individual users access only the latest version on their machines, rather than a distributed one, where many copies of historical files are stored on local machines. If using Git, be sure to include Git LFS.
Best practices for version control

Some suggestions that may make teamwork more efficient:

- Commit little, commit often.
- Keep commit messages clean.
- Avoid indiscriminate commits. It is important to understand that you should only commit what you have actually changed in the project.
- Know your toolset:
  - Git: UI clients
  - Plastic SCM: Gluon
  - Perforce Helix Core: built-in Unity Editor tools
  - Perforce workflow: get latest -> check out files -> edit -> submit
- Feature branches and Git Flow: main, hotfix, release, develop, etc. Both Plastic SCM and Perforce have automated tools to help manage merging branches back into the mainline. Plastic SCM does this with the help of MergeBot, and Perforce uses Helix Swarm for managing code reviews, which can also be set up with automated testing.

The biggest takeaway is the importance of clear team communication. As a team, you need to agree on your guidelines.
Thoughts on Finishing Assassin's Creed Odyssey
/2020/09/08/18/39/

115 hours, main story plus both DLCs completed. Odyssey is an excellent game, but not a masterpiece; on the series' road toward becoming an RPG, it is a milestone. In terms of characterization, Odyssey is undoubtedly successful. Since I chose the warm-hearted route, the malaka I saw was someone who deeply longed for the warmth of family, just like the lyric "Travel in path alone, back to the warmth of home" from "Odyssey (Modern Version)" in the game's soundtrack: every step taken in a foreign land leads toward home. The two main supporting characters, Phoibe and Brasidas, are both memorable. Visually, the excellent art direction comes with the usual Ubisoft-style bugs; you may get stuck inside a wall while admiring the scenery, which can be quite a mood killer. Notably, Odyssey's depiction of Greek scenery is outstanding, and every synchronization point is worth a screenshot as wallpaper. The painful part is the plot: of the three main story lines, only the family line feels complete, while the other two start strong and finish weak, and the excess of filler side quests severely dilutes the pacing of the main story, making you want to go play The Witcher 3 halfway through. I personally like the overall plot of DLC 1, but some details are handled poorly, and the weak character motivations lower its score. DLC 2 is very long; overall, the Underworld chapter is better than the Atlantis chapter, which is better than the Elysium chapter, though the portrayal of the realms of the gods is splendid on the whole. If I have any expectations for Valhalla, releasing on 11.10, a few lines sum them up: may the plot not collapse and the quests be streamlined; may there be fewer bugs and less awkward wall-clipping; may the scenery stay picturesque, and bring on the burly Norsemen!
-
-
Character half-body shot: a {medium shot | over the shoulder shot | head and shoulder shot} portrait of [content], [art style], [lighting], [other keywords], [MJ parameters]

Character full-body shot: a full body portrait of [content], [art style], [lighting], [other keywords], [MJ parameters]

The composition part specifies how the image is framed, e.g. a closeup, a medium shot or a long shot. The basic compositions are:
- Closeup: closeup, portrait
- Full body: full body, full body portrait
- Landscape: wide angle, epic composition, low angle, high angle

A prompt usually starts directly with a [composition] of ..., where [composition] is the composition you choose. For instance, for a closeup you can say a closeup shot of ... or a headshot portrait of ...; for a full-body shot you can say a full body portrait of ....

The images below are examples of a closeup, a medium shot and a long shot. The prompt is a [composition] of an old asian lady --ar 3:4 --q 1.5, where [composition] is replaced by closeup shot, medium shot and full body portrait respectively, with the aspect ratio set to 3:4, 2:3 and 9:16. The last image is a landscape whose prompt is vast grassland, wide angle, epic composition --q 1.5 --ar 32:9:
You can see the differences between these compositions; as for why the aspect ratio was changed each time, see the parameter list below.
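The prompt patterns above are easy to assemble programmatically. Below is a small Python sketch (a hypothetical helper of my own, not an official Midjourney tool; the function name and defaults are assumptions) that builds a prompt string from the composition, content and parameter pieces:

```python
def build_prompt(composition, content, art_style=None, lighting=None,
                 extra=None, params=None):
    """Assemble a Midjourney-style prompt:
    'a [composition] of [content], [art style], [lighting], [extras] [params]'."""
    parts = [f"a {composition} of {content}"]
    # optional style/lighting phrases are appended in template order
    for piece in (art_style, lighting):
        if piece:
            parts.append(piece)
    if extra:
        parts.extend(extra)
    prompt = ", ".join(parts)
    # MJ parameters (e.g. --ar, --q) go at the end, space-separated
    if params:
        prompt += " " + " ".join(params)
    return prompt

print(build_prompt("closeup shot", "an old asian lady",
                   params=["--ar", "3:4", "--q", "1.5"]))
```

Swapping the composition argument reproduces the closeup/medium/full-body variants shown above.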
+
Image content

The content part specifies what appears in the image. It can be detailed or brief depending on your needs, but it usually consists of several short phrases. For example, I want to design a character modeled on a phoenix: a full-body shot, with red and yellow flowers, wearing colorful and gorgeous accessories, so the prompt is a full body portrait of a phonix goddess, red and yellow blossoms, wearing rainbow opal accessories, exquisite decorations --ar 9:16


The first image added the --test parameter, hence its richer details.

The same applies to non-characters. For example, I now want to design the city of Atlantis, standing by a cliff with grand, luxurious architecture, so I can use the city of Atlantis on steep cliff, enormous beautiful palace, exquisit architecture --aspect 9:32 --q 1.5 to get the images below:
Lighting is also an important part of an image, and we can specify the lighting type directly. Taking vast grassland with a lake in the center, a giant tree growing by the lake, --ar 16:9 --q 1.5 as the base prompt, consider in turn the lighting keywords moody lighting, morning lighting, cinematic lighting, soft lighting, volumetric lighting, rembrandt lighting, godrays and chiaroscuro:


Besides landscapes, different lighting can also be applied to characters. Below, taking a full body portrait of a phoenix goddess, red and yellow blossoms, wearing rainbow opal accessories, exquisite decorations --ar 9:16 --q 1.5 as the base prompt, the same lighting settings are added in turn:
Below, taking a medium shot portrait of a beautiful women in dark green kimono, beautiful face, smile, blue eyes, long black hair, painted by Anne Stokes, rembrandt lighting, [color], ultra detailed, plume --ar 2:3 --s 5000 as the base prompt, the color keywords vibrant color, prismatic, black and white, monochrome, colorful and rainbow are used in turn:

Finally, I set the prompt to a full body portrait of a wicked goddness, beautiful white dress, evil smile, red eyes, black wings, shining gold flowers on her hair, concept art, photo realistic, painted by Dorothea Tanning, back lighting, dramatic lighting, greyscale, intricate details, bold brushstrokes, mystical --ar 2:3, which gave the following images (the last two used --testp):

So we choose the prompt a beautiful magnificent steampuck building by the seaside, view from the sea, rigorous architecture, ultra realistic, epic composition, wide angle, close up, morning lighting, volumetric lighting, warm colors, intricate details, 8K, hd, unreal engine, enchanting --ar 9:32 --test --creative and generate the images below:


Adding some artists yields the images below (images 1-2: painted by Earl Norem, Edwin Lord Weeks; image 3: Elizabeth Shippen Green; image 4: Ford Madox Brown; image 5: Farel Dalrymple; image 6: François Schuiten; image 7: Franz Marc; image 8: Georges Rouault):
For example, below I used several different composition keywords to generate a beautiful girl (the keywords are, in order: portrait; medium shot; full body shot; full body shot, dutch angle; portrait, dutch angle, depth of field; portrait, side view; full body shot, back view; full body shot, from above):

Besides specifying an art style explicitly, you can also steer the image toward a particular style by naming artists and works. Unlike MJ, however, NovelAI does not rely heavily on artists: usually you add none, or at most one artist or work. For example, below I specified ghibili, Hayao Miyazaki, breath of the wild and dark soul respectively:
]]>

 Essays

 Essays
 Computer
 Games
 Tools
 Painting
 Deep Learning
+
+
+
+ The Jittering Issue with Damping in Cinemachine and How to Tackle it
+ /2023/07/08/18/22/
If you are familiar with Cinemachine, you probably know there is a knotty problem with Cinemachine's damping if you are using Framing Transposer or some other components to track a follow point: the camera jitters with damping enabled under an unstable frame rate. The more unstable the frame rate is, the more heavily the camera will jitter. This post discusses this phenomenon and proposes a workaround to solve the issue.
+
+
Camera jitters with damping in Cinemachine
+
Unity's Cinemachine has a notoriously severe problem that may cause the follow object to seemingly jitter when you are using the Framing Transposer component with damping enabled.

To show this, I did a simple experiment. I created a new blank scene and spawned a new cube attached with the following script:
if (elapsedTime >= 5.0f)
{
    if (currentSpeed > 0.0f) { currentSpeed = 0.0f; } else { currentSpeed = speed; }
    elapsedTime = 0.0f;
}

transform.position += new Vector3(1, 0, 0) * currentSpeed * Time.deltaTime;
    }
}
+
This script moves the cube for 5 seconds, then keeps it steady for another 5 seconds, and then continues moving. The move speed as well as the fps (frames per second) can be set to test under different conditions.
+
A new virtual camera is then created with a
+Framing Transposer component following this cube. A default
+damping of 0.2 is used.
+
Here is the result with speed set to 100 and fps set to 0 (when fps is 0, the real fps is determined by Unity and may be unstable).
+
+
The jitters are very clear. You can also notice that the frame rate (presented in the Statistics panel) is very unstable, and we will soon see that it is the unstable fps that results in camera jitters.
+
Cinemachine proposes a workaround to alleviate this problem: a revised version of damping that sub-divides each frame and simulates damping over the consecutive series of sub-frames. To enable this functionality, go to Edit -> Project Settings -> Player -> Script Compilation and add the CINEMACHINE_EXPERIMENTAL_DAMPING macro to it.
+
+
Okay, now we have enabled the new damping algorithm; let's see how it mitigates the jittering issue. Here is the result with the same settings we used in our previous experiment, i.e., speed at 100 and fps at 0.
+
+
It is astonishing to see that the jittering issue becomes even more severe. I conjecture that the variance of fps significantly amplifies camera jitters when this feature is enabled. In other words, the experimental damping algorithm responds to the variance of fps in a NON-linear way: when the variance is small, the experimental damping reduces the gaps in camera location between contiguous frames; but when the variance is large, it enlarges the gaps, leading to unacceptable jittering. (Note: I did not validate this conjecture. If you are interested, just review the code and test it yourself.)
+
What is the expected result if fps is stable? Let's run more experiments!

Here is the result with speed at 100 and fps at 120 (a very high fps, which is usually prohibitive in shipped games).
+
+
A very steady camera! What about setting fps to 60? Here is the result.
+
+
An fps of 60 performs equally well as 120, which is anticipated since fps is stable. Okay, let's try a final experiment where fps is set to an extreme value of 20.


Even a low fps of 20 keeps our camera stable, as long as fps itself is stable.

Now we can conclude that it is the instability of fps that induces camera jitters, regardless of the exact value of fps. But why?
+
Why the camera jitters
+
Before answering this question, let us first take a look at the source code of damping implemented in Cinemachine.

#if CINEMACHINE_EXPERIMENTAL_DAMPING
    // Try to reduce damage caused by frametime variability
    float step = Time.fixedDeltaTime;
    if (deltaTime != step)
        step /= 5;
    int numSteps = Mathf.FloorToInt(deltaTime / step);
    float vel = initial * step / deltaTime;
    float decayConstant = Mathf.Exp(-k * step);
    float r = 0;
    for (int i = 0; i < numSteps; ++i)
        r = (r + vel) * decayConstant;
    float d = deltaTime - (step * numSteps);
    if (d > Epsilon)
        r = Mathf.Lerp(r, (r + vel) * decayConstant, d / step);
    return initial - r;
#else
    return initial * (1 - Mathf.Exp(-k * deltaTime));
#endif
}
+
Translating into mathematics, we have:

$r(x, t) = x\, e^{-\lambda t}, \quad \lambda = \ln(100) / \tau$

where $\tau$ is the damp time parameter and $t$ the elapsed time in this frame. This equation decays the input $x$, the distance for the camera to go to the desired position, by an exponential factor $e^{-\lambda t}$. If $t = \tau$, the residual will be $0.01x$, meaning that at this frame, the camera will traverse 99% of the desired distance to go, leaving only 1% for future frames.
+
OK, let's assume we've placed a cube at the origin and it moves along the x-axis at a fixed speed of $v$ m/s. A camera is placed to track the cube with damping, where the damp time is $\tau$. Let's further denote the delta time of the $k$-th frame by $t_k$.

Having all variables fully prepared, we can then simulate the object movement and the camera tracking process.

At the beginning of the 0-th frame, the camera and the cube are both at the origin, i.e., (0, 0, 0). As the cube only moves along the x-axis, we can omit the y and z dimensions and use a one-dimensional coordinate to represent the cube and camera positions.
+
At the 1st frame, the cube moves to $v t_1$, the distance the camera traverses is $v t_1 (1 - e^{-\lambda t_1})$, and the residual is $r_1 = v t_1 e^{-\lambda t_1}$. We set $r_0 = 0$ for simplicity.

At the 2nd frame, the cube moves to $v (t_1 + t_2)$, the distance the camera traverses is $(r_1 + v t_2)(1 - e^{-\lambda t_2})$, and the residual is $r_2 = (r_1 + v t_2)\, e^{-\lambda t_2}$.

At the $k$-th frame, we have the increment $\Delta_k = v t_k$, the traversed distance $(r_{k-1} + \Delta_k)(1 - e^{-\lambda t_k})$, and the residual $r_k = (r_{k-1} + \Delta_k)\, e^{-\lambda t_k}$.

Without loss of generality, we can set $v = 1$. The following sections will use this setting unless otherwise stated.
+
For different combinations of $t_k$, $r_k$ may behave differently. Let's dive in and see how $t_k$ influences the results.
+
Case 1: stable FPS, all $t_k$ are equal

When all $t_k$ are equal, say $t_k = t$, our recurrence reduces to:

$r_k = (r_{k-1} + v t)\, e^{-\lambda t}$
+
+
$r_k$ apparently has a limit of

$r^* = \dfrac{v t\, e^{-\lambda t}}{1 - e^{-\lambda t}}$

when $k \to \infty$, since $e^{-\lambda t} < 1$. This explains why a camera with damping always has a maximum distance to its following target. The maximum distance, also the supremum, is exactly $r^*$. When $\tau$ is larger, $e^{-\lambda t}$ will be larger, implying that the maximum distance between the camera and its following target will be larger.
+
What if $t_k$ is mutable? In this case, we can assume there exists an upper bound $T$ such that all $t_k$ satisfy $t_k \le T$. Then we are able to derive the same conclusion.
+
Another question is: why does the camera not jitter when FPS is stable? We turn to examine the sign of $r_k - r_{k-1}$:

$r_k - r_{k-1} = (r_{k-1} + v t)\, e^{-\lambda t} - r_{k-1} = v t\, e^{-\lambda t} - (1 - e^{-\lambda t})\, r_{k-1} > 0 \iff r_{k-1} < r^*$

Starting from $r_0 = 0$, every $r_{k-1}$ stays below the supremum $r^*$. Therefore, when FPS is stable, $r_k$ is always larger than $r_{k-1}$, and jitter never happens.
+
Case 2: unstable FPS, $t_k$ vary

When FPS is unstable, where $t_k$ may mutate, how will the camera move in response to its following target? We can still examine the sign of $r_k - r_{k-1}$, but in another way:

$r_k - r_{k-1} = e^{-\lambda t_k}\,\Delta_k + (1 - e^{-\lambda t_k})\,(-r_{k-1})$

This equation uncovers why camera jitters happen with unstable FPS. The residual difference at the $k$-th frame is essentially an interpolation between the following target's current position increment $\Delta_k = v t_k$ and the last frame's negative residual $-r_{k-1}$, where the interpolation strength is the decaying factor $e^{-\lambda t_k}$. As both $v$ and $r_{k-1}$ are fixed within the frame, a change in $t_k$ will incline the resulting residual difference to different ends, either $\Delta_k$ or $-r_{k-1}$.
+
In our simplified case, in which the target moves at a fixed speed in the direction of the x-axis, $\Delta_k$ will always be positive (though its magnitude can vary) and $-r_{k-1}$ will always be negative. A mutating $t_k$ thus has a chance to alter the sign of $r_k - r_{k-1}$, which further brings about camera jitters.
+
So when will the camera jitter? From the above equation, we know that the camera will jitter when the sign of $r_k - r_{k-1}$ consistently changes over time, i.e., its value oscillates around zero. Let's make it equal to zero and see what we can find:

$e^{-\lambda t_k} = \frac{r_{k-1}}{r_{k-1} + \Delta_k}$

This equation tells us that when the decay factor $e^{-\lambda t_k}$ is near the ratio $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$, the camera will have a large chance to jitter. This motivates us to improve damping by filtering out the occasions where $e^{-\lambda t_k}$ is very close to $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$.
+
What about going deeper? We can treat $t$ as a variable, and all others as constants. This abstraction gives us a function of $t$:

$g(t) = (r_{k-1} + v t)\, e^{-\lambda t} - r_{k-1}$

Taking the derivative of $g$, we know that $g$ is monotonically increasing when $t < \frac{1}{\lambda} - \frac{r_{k-1}}{v}$ and monotonically decreasing when $t > \frac{1}{\lambda} - \frac{r_{k-1}}{v}$, with $g(0) = 0$. Hence, to make the sign of $g$ mutable, the turning point $\frac{1}{\lambda} - \frac{r_{k-1}}{v}$ must be positive and the minimum of $g$ must be negative.
+
The minimum of $g$ over $t \ge 0$ can be easily computed:

$\inf_{t \ge 0} g(t) = \lim_{t \to \infty} g(t) = -r_{k-1} < 0$

The last inequality holds because $r_{k-1} > 0$.
+
+
This reveals the fact that: when , a variant is likely to cause to change its sign,
+thus resulting in camera jitters. Suppose is large enough, so then
+the k-th residual gets
+smaller than while is positive. A smaller pushes to become smaller for the next frame,
+which further pushes the root of the function to become larger. In this
+case, even with the same delta time, will have a larger chance wo
+fall in the negative area, i.e., is more likely to be less than the root.
+
+
Solutions
+
Solution 1: imposing an invalid range
+
Based on what we've discussed so far, we can immediately come up with a simple solution: enforce $e^{-\lambda t_k}$ to be $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$ if they are very close. That is to say, we use a small value $\epsilon$: if $\left|e^{-\lambda t_k} - \frac{r_{k-1}}{r_{k-1} + \Delta_k}\right| < \epsilon$, we just set $e^{-\lambda t_k}$ to $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$.

Note that the ratio can be zero or negative. If this is the case, we keep the original $e^{-\lambda t_k}$ without doing anything. Besides, you should be aware that the $t_k$ here is not the time this frame actually takes; instead, it is just the duration used to calculate damping.
+
Let us explain it more quantitatively. Suppose $e^{-\lambda t_k} = \frac{r_{k-1}}{r_{k-1} + \Delta_k} + \delta$, where $0 < \delta < \epsilon$. We then plug it into the original expression of $r_k$:

$r_k = (r_{k-1} + \Delta_k)\left(\frac{r_{k-1}}{r_{k-1} + \Delta_k} + \delta\right) = r_{k-1} + \delta\,(r_{k-1} + \Delta_k)$

This demonstrates that the camera now lags behind its following target more than in the previous frame, since the residual is larger. After substituting $e^{-\lambda t_k}$ with the ratio, $r_k - r_{k-1}$ would be zero, meaning that the camera now keeps the same residual as in the last frame. The camera does not jitter.
+
Here comes the question: what if the following target slows down, stops, or even turns back in the opposite direction while the camera still keeps the same residual?
+
It is quite a good question. But if we look carefully at the function $g$, we will find that this situation never happens. Let's rewrite it here:

$g(t) = (r_{k-1} + \Delta_k)\, e^{-\lambda t} - r_{k-1}$

This time, we do not constrain the value of $\Delta_k$; but as of the last frame, $r_{k-1}$ is positive.
+
When $\Delta_k$ gets smaller but still positive, we observe that the function gradually shifts leftwards, pushing the root towards zero. This implies that the unstable area gets contracted and the probability of keeping the same residual gets smaller.
+
+
When $\Delta_k$ is zero, i.e., the following target stops, the current residual can be readily calculated as $r_{k-1} e^{-\lambda t_k}$, which closes the distance gap between the camera and the following target. As the residual vanishes, the ratio $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$ degenerates to a division by zero, outputting an invalid value beyond the range of $e^{-\lambda t}$.

When $\Delta_k$ is negative and large enough in magnitude, the ratio becomes negative, also beyond the range of $e^{-\lambda t} \in (0, 1)$, so the snapping rule never triggers in these cases.
+
We can implement this algorithm in less than 100 lines of code. You need to modify three files in the official Cinemachine source code directory.

First is Predictor.cs. Add an ImprovedDamp function:

The input bonus is the ratio $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$, and the parameter tolerance is the $\epsilon$ introduced above.
+
In file CinemachineVirtualCameraBase.cs, add a new
+function ImprovedDetachedFollowTargetDamp:
+
public Vector3 ImprovedDetachedFollowTargetDamp(Vector3 initial, Vector3 dampTime, float deltaTime)
{
    GameObject go = GameObject.Find("Cube");                    // Hard-coded lookup of the follow target -- you should not do this!
    Vector3 deltaDistance = new Vector3(100, 0, 0) * deltaTime; // Hard-coded velocity -- you should not do this!
    Vector3 residual = initial - deltaDistance;
    Vector3 bonus = new Vector3(
        residual.x / (residual.x + deltaDistance.x + 1e-7f),
        residual.y / (residual.y + deltaDistance.y + 1e-7f),
        residual.z / (residual.z + deltaDistance.z + 1e-7f));
This piece of code is very informal, and you should never write your code like this. The purpose of this function is to obtain $\Delta_k$ and the bonus ratio. I reckon the correct way to do this is to create one (or two) new variables in the CinemachineVirtualCameraBase class and update them on each tick. The code presented here is only for demonstration.
+
In file CinemachineFramingTransposer.cs, change the
+called function for damping:
+
cameraOffset = VirtualCamera.ImprovedDetachedFollowTargetDamp( // Originally DetachedFollowTargetDamp
    cameraOffset, new Vector3(m_XDamping, m_YDamping, m_ZDamping), deltaTime);
+
You could also try other components, not just
+FramingTransposer here.
+
With the default tolerance=0.05, the result is shown below.


Camera jitters disappear. Note that the general fps is quite high (around 400~500). This is because our scene is quite simple, containing only a cube and a camera. In order to simulate a more realistic runtime game situation, I placed 20k cubes in the scene; now the fps is around 30, but still unstable.
+
Below is the result when using the raw damping algorithm.


The camera jitters more severely due to the generally lower FPS. What about using the improved damping algorithm? Here is the result with tolerance=0.05.


Just as expected, camera jitters do not show up. Let's try different tolerances. How well will a small tolerance alleviate jitters? Below is the result with tolerance=0.01.


Camera jitters occur again! This suggests that an excessively small value cannot fully filter out the occasions that lead to camera jitters. Let's try our final experiment with tolerance=0.1.
+
+
Camera jitters disappear, but the camera motion seems a little stiff. These experiments show that an appropriate value of tolerance is needed to ensure both the smoothness and the robustness of the camera.
+
Solution 2: adding a low-pass filter
+
Our improved damping perfectly solves camera jitters under unstable fps, but it looks very stiff when it reaches the boundary of the max damping distance. Can we make it more realistic so that the object won't look so stolid? Yes, of course: we can add a low-pass filter, or moving average, to our improved damping to achieve smoother results.

Recall the algorithm of the improved damping: if $e^{-\lambda t_k}$ is within the tolerance $\epsilon$ of the ratio $\frac{r_{k-1}}{r_{k-1} + \Delta_k}$, we just snap it to that ratio. Instead of hard-zeroing the residual change, we introduce $\bar d_k$, a smoothed version of the original delta residual $d_k = r_k - r_{k-1}$. If the condition holds, we calculate $\bar d_k$ as a running average of the recent $d_k$, which can be iterated through a recursive form:

$\bar d_k = \alpha\, d_k + (1 - \alpha)\, \bar d_{k-1}$

Note that $\bar d_k$ gets updated if and only if the condition holds, i.e., when the camera lies in the unstable area. The use of $\bar d_k$ is similar to low-pass filters in the sense that they both filter out high-frequency signals.
+
Below is a sample code implementation in file Predictor.cs:
Let's try it out! With the same damp time and tolerance as before, the low-pass filter achieves the following damping result:


Now the camera looks much smoother and more flexible. What about trying a smaller damp time? Here is the result:
+
+
The result is okay, but it still jitters sometimes. This is because a smaller $\tau$ leads to a larger $\lambda$ and thus a larger chance of jitter. To solve this issue, we can set a larger tolerance $\epsilon$, or use a smaller filter weight $\alpha$ on the new sample. We adopt a smaller $\alpha$ and see how it performs.
+
We can measure this sort of instability more quantitatively. Below is a graph plotting $r_k - r_{k-1}$ during five seconds of camera tracking with the default damp time. The original curve (in blue) oscillates over time due to the instability of fps. The improved damping method eliminates all the oscillation and makes the curve absolutely flat. Empowered by the low-pass filter, the curve becomes smooth without loss of stability.
+
+
Below is the graph with a smaller damp time. As can be seen, even with improved damping, the camera still has a chance to vibrate, and the original curve oscillates much more intensely than before. Employing the low-pass filter gives a much smoother and more stable camera motion curve, as expected.
+
+
Speaking of this, why can't we just soften the improved damping assignment? Instead of hard-snapping the decay factor to the ratio whenever the gap is within the tolerance, we can blend between the snapped value and the original value with a weight $w \in [0, 1]$ that is a function of the gap, parameterized by an exponent $\beta$.

$\beta$ is a parameter controlling how fast the value of $w$ grows from $0$ to $1$ as the gap shrinks: the larger $\beta$ is, the more mass is concentrated on the full-assignment side.
+
Below is the result with a moderate $\beta$:


Not bad! The soft version of improved damping really makes the camera smoother and less stiff than the vanilla improved damping algorithm. The following plot also shows that with the soft parameterization, the camera trajectory is much more natural, with a negligible amount of oscillation.
+
+
We also compare different values of $\beta$. Below is the result with a larger $\beta$.


A larger $\beta$ makes the camera stiffer, but it is still better than the original improved damping algorithm.
+
Below is the result with a smaller $\beta$.


A small $\beta$ is less effective, as the magnitude of attenuation it applies is not enough to compensate for the oscillation that the unstable fps brings about.
+
Let's try another damp time. The result with a smaller damp time and a moderate $\beta$ is shown below.


When the damp time is smaller, the oscillation is more severe, as we've already stated above. What about a larger $\beta$?


Better, but still not sufficient to mitigate the oscillation. Let's try an even larger $\beta$.


Almost perfect. We can conclude that a smaller $\tau$ needs a larger $\beta$ to offset the intense jitters resulting from unstable fps. Besides, you can combine the soft improved damping method with low-pass filters to achieve a smoother transition.
+
Solution 3: continuous residual
+
Okay, let's forget all the aforementioned solutions and revisit the residual update formula from the very beginning:

$r_k = (r_{k-1} + v t_k)\, e^{-\lambda t_k}$

Reformulate this equation into the following form:

$r(t + \Delta t) = \big(r(t) + v \Delta t\big)\, f(\Delta t)$

where $v$ is the speed of the camera's follow target, and $f$ is a more generalized form of the damping function $e^{-\lambda \Delta t}$. Theoretically, it can represent any function of interest.

Now, regarding $r$ as a function with respect to $t$, we can seek to obtain its derivative:

$r'(t) = \lim_{\Delta t \to 0} \frac{r(t + \Delta t) - r(t)}{\Delta t} = v\, f(0) + r(t)\, f'(0) = v + r(t)\, f'(0)$

We use the equality $f(0) = 1$ because when you plug $\Delta t = 0$ into the update formula, you will get $r(t) = r(t)\, f(0)$, implying $f(0) = 1$.
+
What about derivatives of higher orders? We can calculate the second-order derivative as follows:

$r''(t) = r'(t)\, f'(0)$

It is a nice form which bridges the first-order derivative $r'$ and the second-order derivative $r''$. In fact, any $n$-th order derivative ($n \ge 2$) can be recursively calculated as:

$r^{(n)}(t) = r^{(n-1)}(t)\, f'(0)$
+
Having all these derivatives, we can then expand $r(t + \Delta t)$ using the Taylor series and calculate the difference to $r(t)$:

$r(t + \Delta t) - r(t) = \sum_{n=1}^{\infty} \frac{r^{(n)}(t)}{n!} \Delta t^n = r'(t) \sum_{n=1}^{\infty} \frac{f'(0)^{\,n-1}}{n!} \Delta t^n$

Note that if we are still choosing $e^{-\lambda \Delta t}$ as our damping function $f$, its derivative with respect to $\Delta t$ will be $-\lambda e^{-\lambda \Delta t}$ and the value at zero will be $f'(0) = -\lambda$.
+
In practice, we first decide how many terms $p$ of the coefficient sum should be taken in, then sum them up and multiply by the velocity term $r'(t)$; the result is denoted by $D$. The residual at the current frame can then be readily computed as $r_k = r_{k-1} + D$. To save computation, we can first cache the powers $f'(0)^{\,n-1}$ (a geometric sequence) together with $1/n!$ up to the threshold order $p$, and then efficiently compute the coefficient sum.
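To make the truncation concrete, here is a small Python check of the expansion against the closed-form solution of $r' = v - \lambda r$ (a sketch based on the derivation above; the test values are arbitrary):

```python
import math

def taylor_residual(r_prev, v, damp_time, dt, p):
    """Advance the residual by one frame using a p-term Taylor expansion of
    the continuous damping ODE r' = v - lam*r, with r^(n) = -lam * r^(n-1)."""
    lam = math.log(100.0) / damp_time
    r1 = v - lam * r_prev  # first derivative at the start of the frame
    coeff = sum((-lam) ** (n - 1) * dt ** n / math.factorial(n)
                for n in range(1, p + 1))
    return r_prev + r1 * coeff

def exact_residual(r_prev, v, damp_time, dt):
    """Closed-form solution of the same ODE over one frame."""
    lam = math.log(100.0) / damp_time
    return v / lam + (r_prev - v / lam) * math.exp(-lam * dt)

r0, v, tau, dt = 0.02, 1.0, 0.5, 1.0 / 60.0
errors = [abs(taylor_residual(r0, v, tau, dt, p) - exact_residual(r0, v, tau, dt))
          for p in (1, 2, 3)]
print(errors)  # the error shrinks as p grows
```

With a comfortable damp time the error is already tiny at $p = 2$ or $3$; shrinking the damp time inflates $\lambda$ and pushes the required $p$ up, matching the experiments below.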
+
To estimate its error, we use the Lagrange remainder:

$R_p = \frac{r^{(p+1)}(\xi)}{(p+1)!}\, \Delta t^{p+1}, \quad \xi \in (t, t + \Delta t)$

Decompose it:

$R_p = \frac{f'(0)^{\,p}\, r'(\xi)}{(p+1)!}\, \Delta t^{p+1}$

where $r'(\xi)$ is bounded and $\Delta t < 1$ as we assumed. We can see that the error is asymptotically negligible with respect to $p$, especially when $\Delta t$ is small.
+
Recall that $f'(0) = -\lambda$, where $\lambda = \ln(100)/\tau$ and $\tau$ is the damp time. If $\tau$ is large, say 0.5 or even 1.0, the value of $\lambda$ will be somewhat small, so that a decent precision can be reached within a few steps of expansion, i.e., a small $p$, say 2 or 3, could satisfy camera stability. However, if $\tau$ is small, say 0.2 or 0.1 or even smaller, the value of $\lambda$ grows larger, and then a larger $p$ might be needed to reach our expected precision. This is in accordance with our observation that a smaller $\tau$ generally leads to a more unstable camera trajectory. We will show this soon.
+
Let's first try a comfortable damp time with $p = 1$. Recall that $p$ is the maximum order of derivatives we use to approximate the residual difference; $p = 1$ means that we only use $r'(t)$ in the coefficient term. Here is the result:


Looks nice! What about setting $p = 2$?


Not much difference, but a little bit smoother. Let's try a smaller damp time, with $p = 1$ and $p = 2$ respectively. First comes $p = 1$.
+
+
It's okay, but it seems too fast when the cube comes back to stillness. How about $p = 2$?


Now everything works! Next, let's set the damp time even smaller, which generally won't be used in actual gameplay, but as a test it's worth a try. We fix this tiny damp time and try different $p$ to see how they influence our camera trajectory.
+
Here is the result with $p = 1$:


Okay... a total mess. Try $p = 2$:


Unfortunately, the cube always stays behind the camera. Now $p = 3$:


Forget about it... Let's try $p = 4$:


Things are getting better! At least it does not shake anymore and begins to stay at the right position. I bet $p = 5$ is better:


It's close! Last, we try $p = 6$:
+
+
Finally, the camera handles everything well. As we can see from the process, a small $\tau$ requires a large $p$ to reach the minimum acceptable precision. I hope you never have the chance to use such a small $\tau$; if it happens, cache enough orders of derivatives, or it would be prohibitively expensive to compute at runtime.
+
To further understand why this method solves the jittering issue, we take a deeper look at the expression of $r'(t)$ derived above. It is an ODE, and we can solve it out (proof left to the readers):

$r(t) = \frac{v}{\lambda} + \Big(r(0) - \frac{v}{\lambda}\Big)\, e^{-\lambda t}$

Here I've expanded $f'(0)$ as $-\lambda$. We cannot directly use this explicit expression to calculate the residual because there is no correct absolute time stamp when the game is running. All we have is the previous frame's residual $r_{k-1}$ and the elapsed time of this frame $t_k$. And as the velocity may change over time, a closed form of $r(t)$ cannot serve our purpose well. We can only incrementally calculate camera residuals at each frame based on what we currently have.
+
$r(t)$ is a monotonically increasing function, and of course, it's continuous. The continuity ensures that the camera trajectory is always smooth and never jitters, provided the fps is sufficiently high (over one thousand, I suppose?).
+
For the original discrete residual, its velocity is:

$\tilde r'(\xi) = \frac{r_k - r_{k-1}}{t_k}$

where $\xi$ comes from Lagrange's Mean Value Theorem. Note that I add a tilde symbol over $r$ to distinguish it from the continuous version above.

This is another ODE, and we can solve it out (proof left to the readers).

Note that we solve the ODE with respect to $t$, the increment of time within the frame, rather than the absolute time. So, we introduce an initial value $\tilde r(0) = r_{k-1}$ to control what the initial value of the residual is at this frame; $t$ is now the elapsed time for this frame, satisfying $0 \le t \le t_k$.
+
The following graph shows how this function changes with different parameters. It can be noticed that the function is very sensitive to the input $t$, the elapsed time of this frame: a small change of the input can significantly change the sign of $\tilde r(t) - r_{k-1}$, thus causing camera jitters. We also notice that a smaller damp time pushes the function leftwards, which also makes it more vulnerable to its inputs.
+
+
Below is a comparison between the five damping algorithms introduced in this article, including the original damping. The damp time is set to 0.2. We observe a significant stability improvement when using any of the four proposed damping algorithms. You should be careful when choosing the most appropriate algorithm, because it depends on the situation in which you intend to use damping: How unstable is your fps? What is the damp time $\tau$? How is the tracked object moving? You should experiment with these algorithms and choose the one that best suits your needs.
+]]>
+
 Games - Animation


 Math
 Essays
 Computer
 Games
 Animation
 Deep Learning
 Machine Learning
+
+
+
+ A Brief Note on Version Control and Project Organization
+ /2022/11/21/16/03/
This is a brief note on the e-book Version Control and Project Organization Best Practice Guide presented at Unite 2022. In this book, we can learn fundamental concepts of version control and some best practices for organizing a Unity project. If you are new to Unity, or you are preparing to set up a larger-scale Unity project, this may be what you need.
+
+
Version control enables you to keep a historical track of your entire project. It facilitates collaboration with your team through trackable and revertible commits organized in the form of a timeline.

Centralized: the repository resides on a dedicated server, and changes are fetched from and sent to the repository directly. To avoid conflicts, users can lock files for modification, which is known as checking out the file.

Distributed: users have a local copy of the project and submit changes whenever they want, without always having to work on the latest files as on a centralized system. But it costs a lot of space to store the entire change history.
+
+
Typical workflow
+
Centralized
+
+
Update your working copy with changes from the server
Document your naming conventions and folder structure.
+
Be consistent with your naming convention.
+
Don't use spaces in file and folder names.
+
Separate testing or sandbox areas.
+
Avoid extra folders at the root level.
+
Keep your internal assets separate from third-party ones.
+
+
+
The .meta file: it holds information about the file it is associated with, e.g., textures, meshes, and audio clips that have particular import settings.
Perforce: Edit -> Project Settings -> Version Control ->
+Mode.
+
Plastic SCM: click the Plastic SCM icon in the toolbar on the top
+right in Unity Editor.
+
+
What to ignore: do not commit the Library folder, or built artifacts such as .exe or .apk files.
+
Work with large files: teams prefer a centralized workflow where large binary files live only on a central server, with individual users accessing only the latest version on their machines, rather than a distributed one where many copies of historical files are stored on local machines. If using Git, be sure to include Git LFS.
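As a concrete starting point, the ignore rules above usually translate into a .gitignore like the following (a common community sketch for Unity projects, not an official file; adjust to your own pipeline):

```gitignore
# Generated by Unity - safe to ignore
/[Ll]ibrary/
/[Tt]emp/
/[Oo]bj/
/[Bb]uild/
/[Bb]uilds/
/[Ll]ogs/
/[Uu]serSettings/

# Build outputs
*.exe
*.apk
*.aab
```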
+
Best practices for version
+control
+
Some suggestions you may need to make teamwork more efficient:
+
Commit little, commit often.
+
Keep commit messages clean.
+
Avoid indiscriminate commits. It is important to understand that you
+should only commit what you have changed in the project.
Feature branches and Git Flow: main, hotfix, release, develop, etc.
+Both Plastic SCM and Perforce have automated tools to help manage
+merging branches back into mainline. Plastic SCM does this with the help
+of MergeBot,
+and Perforce uses Helix Swarm for
+managing code reviews that can also be set up with automated
+testing.
+
The biggest takeaway is the importance of clear team communication. As a team, you need to agree on your guidelines.
]]>
- 游戏 - 游戏理论
+ 随笔随笔
- 游戏
+ 项目管理
@@ -3840,2447 +3296,2991 @@ B-spline (NURBS)
ComponentCameraSystem - A Simplified Camera System for You to Create Plentiful Gameplay Camera Movements and Effects
/2023/02/09/23/21/
ComponentCameraSystem is a simplified, extensible and designer-friendly camera system for Unreal Engine. It enhances the built-in spring arm and camera components in the native Unreal editor across a wide variety of common gameplay camera behaviours, such as keeping a target at a fixed screen position, moving on rail, and resolving occlusion in complex occasions, enabling you to easily create plentiful smooth camera movements and effects within only a few minutes. Go to the Documentation for more details.

Currently ComponentCameraSystem supports Unreal Engine versions >= 5.0. So before using this plugin, please upgrade your project to Unreal Engine 5.0 at its minimum version requirement.

You can buy this plugin at the Unreal Marketplace. Persistent upgrades will be made to make it more stable and support more features.
-]]>
-
 Games - Camera

 Computer
 UE
 Camera
 Games
 Plugin
-
-
-
A Necessary and Sufficient Condition for the Convergence of Infinitely Nested Radicals
/2020/09/09/18/52/
When working through problem sets or exams, we often encounter the evaluation, or the existence proof, of limits of infinitely nested radicals. There are many ways to solve such problems, but they all boil down to one point: bounding. Either bound the expression from both sides and apply the squeeze theorem, or prove boundedness and then take limits on both sides. This post records a necessary and sufficient condition for the existence of such limits, for reference.
Thoughts after Playing Assassin's Creed Odyssey
/2020/09/08/18/39/
115 hours: main story plus both DLCs completed. Odyssey is an excellent game, but not a masterpiece; on the road toward an RPG it is a milestone for the series. In terms of characterization, Odyssey is undoubtedly successful. Since I chose the warm-hearted route, the Malaka I saw longed deeply for the warmth of family; as the lyrics "Travel in path alone, back to the warmth of home" from "Odyssey (Modern Version)" on the game's soundtrack go, every step taken in a foreign land leads home. The two main supporting characters, Phoibe and Brasidas, are both memorable. Graphically, the fine art direction inherits the classic Ubisoft-style bugs: you may get stuck inside a wall while admiring the scenery, which can be a real mood killer. Notably, Odyssey's depiction of Greek scenery is superb; every synchronization point is worth screenshotting as wallpaper. The painful part is the plot: of the three main storylines, only the family line feels complete, while the other two start strong and fizzle out, and the excessive filler side quests severely dilute the pacing of the main story, making you want to go play The Witcher 3 halfway through. I personally like the overall plot of DLC 1, but some details are handled particularly poorly; insufficient motivation lowers its score. DLC 2 is long; overall the Underworld chapter is better than Atlantis, which is better than Elysium, though the depiction of the divine realms is on the whole brilliant. As for what I expect from Valhalla, releasing on 11.10, a few lines sum it up: a plot that doesn't collapse, better-tuned quests; fewer bugs, no more awkward wall-stuck moments; scenery as picturesque as ever, and bring on the burly Vikings!

Aspect ratio: for standard-definition television (SDTV) it is 4:3, and for high-definition television (HDTV) it is 16:9. The ratios most used in film projection are 1.85 (CinemaScope) and 2.35 (anamorphic).

Refresh or frame rate: the refresh rate of the display.

Refresh and motion blur: refresh blur occurs when the display device's refresh rate cannot synchronize with the computer's output; motion blur occurs because objects move faster than the exposure time.
+]]>
+
 Games - Game Theory

 Essays
 Games
+
+
+
A control reference frame refers to a relationship between changes to
-the controller input and how it affects movement or other aspects of
-player control.
Vec3 offset(0.0f, cosf(pitch) * distance, sinf(pitch) * distance); // construct a rotation matrix around the world up-axis Mat3 rotation = Mat3::ZRotation(yaw); mOffset = rotation * offset; Vec3 desiredPosition = GetTargetObject()->GetPosition() + mOffset;
-
World object relative:
decide the camera position from the distance between the primary object and another object; this approach can help handle the case where the character is between the camera and the other object

Mat3 rotation = Mat3::LookAt(GetTargetObject()->GetPosition().DropZ(),
                             mWorldPosition.DropZ());
// note that the reference frame is often 2D, but is not required to be
mOffset = rotation * offset;
Vec3 desiredPosition = GetTargetObject()->GetPosition() + mOffset;

Local offset: similar to a world-relative offset, except that the offset is transformed into local space

Mat3 rotation = mScriptObject->GetMatrix(); // based on the orientation of the script object
mOffset = rotation * offset;
Vec3 desiredPosition = GetTargetObject()->GetPosition() + mOffset;
-
Local angular offset: similar to a world-relative angular
offset, except that the offset is transformed into local space

Object-relative offset:
ignores the target object's current orientation and considers only elevation and distance; the vector from the
target object toward the current camera position defines the coordinate
space used to calculate the desired position. The camera's orientation is usually controlled by the player
-
The position of the player character relative to another object

Player position relative to a defined path

A specified distance away from the closest position on the path to
the player: this method requires attention to the smoothness of camera movement

Post-Camera: update objects that depend on the camera

Render: sometimes one camera view is also used in generating another camera view
-
Splines. 可以用brute
-force的方法去计算spline的长度,代码如下
floatlength(0.0f); for (int i = 0; i < controlPoints.Size(); ++i) { Vec3 pathPosition = EvaluateSegment(i, 0.0f); // start of i'th segment controlPoints[i].mLength = length; // save length at the control point for (int j = 1; i < kMaxSegmentSlices; ++j) // includes next control point { Vec3 newPosition = EvaluateSegment(i, j/kMaxSegmentSlices); Vec3 delta = newPosition - pathPosition; length += delta.Magnitude(); // add the "length" of this slice pathPosition = newPosition; } } // length now holds the approximate total length
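The per-control-point lengths cached above are typically used afterwards to map a distance along the path back to a segment index and a parametric value. A minimal sketch of that lookup (the `PathSample` and `SampleByDistance` names are mine, not the book's):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Map a distance along the path to (segment index, local t in [0,1]).
// cumulative[i] holds the path length accumulated at the start of segment i.
struct PathSample { std::size_t segment; float t; };

PathSample SampleByDistance(const std::vector<float>& cumulative,
                            float totalLength, float distance)
{
    // clamp to the valid range of the path
    if (distance <= 0.0f) return {0, 0.0f};
    if (distance >= totalLength) return {cumulative.size() - 1, 1.0f};
    // find the last segment whose start lies before 'distance'
    std::size_t i = 0;
    while (i + 1 < cumulative.size() && cumulative[i + 1] <= distance) ++i;
    float segStart = cumulative[i];
    float segEnd = (i + 1 < cumulative.size()) ? cumulative[i + 1] : totalLength;
    float t = (distance - segStart) / (segEnd - segStart);
    return {i, t};
}
```

The resulting (segment, t) pair can then be fed straight into EvaluateSegment.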
Keep an object within screen bounds or other arbitrary
viewport-based constraint: movement subject to limits


Interactive 3D Camera
Systems

The difficulty of an interactive 3D camera system lies in providing a contextually appropriate view, in terms of both aesthetics and gameplay. Its characteristics include:
-
Pre-defined scripting solutions for all ledge situations

Use pre-defined solutions for difficult-to-resolve cases, especially
in confined spaces: stationary cameras or spline paths can be used

Teleport the camera to a new position if LOS fails due to the ground
being between the camera and its target: this may require detecting the player's height

Use player movement hysteresis to provide an approximate path taken
by the player character or target object

Interrogate the surrounding geometry to dynamically choose a path
based on the last position at which the player was visible to the
camera: that position gives information about the ledge's location for use in the camera
path, but make sure the camera neither gets too close to the object nor ends up oriented vertically
Generate a prospective movement toward the desired position
-
Test to see if the prospective movement will cause a collision
(optional)
-
Resolve the collision (optional)
-
Generate and validate an alternative movement (optional if first
-move fails)
-
Move the camera to the new position
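The steps above can be sketched as a toy one-dimensional pipeline, where a simple wall test stands in for real collision detection and resolution (all names hypothetical):

```cpp
#include <cassert>
#include <algorithm>

// Toy 1D version of the movement pipeline: propose a speed-limited move
// toward the desired position, test it against a wall, and resolve the
// collision by clamping the camera at the wall.
float StepCamera(float current, float desired, float maxStep, float wall)
{
    float delta = std::clamp(desired - current, -maxStep, maxStep); // prospective move
    float candidate = current + delta;                              // test position
    if (candidate > wall) candidate = wall;                         // resolve collision
    return candidate;                                               // move the camera
}
```

In a real system each stage would work on Vec3 positions and query the collision world instead of a single wall plane.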
+
Attempt to keep the player character in view (3rd person
+cameras)
+
Prevent the camera passing through (or close to) game objects or
+physical environmental features
+
Do not require the player to manipulate the camera simply to play
+the game -- unless it is a design requirement
+
Allow camera manipulation when possible or dictated by game design
+requirements
+
Minimize unintentional camera motion whenever possible
+
Ensure camera motion is smooth
+
Limit the reorientation speed of the camera
+
Limited roll should be allowed in most regular game cameras
+
Do not allow the camera to pass outside the game world
+
Retain the camera position with respect to the player when instantly
moving the camera to a new position (3rd person cameras)
+
Do not focus directly on the player character when it is moving
+
Retain control reference frame after rapid or instantaneous camera
+motion
+
Avoid enclosed spaces with complex geometry (3rd person
+cameras)
-
Character motion

First, consider how character movement affects first-person and third-person cameras
+
Chapter 5: Camera Solutions
+
Game Genre Camera Solutions
-
First person cameras:
note that the camera position is usually not the eye position, so pay attention to FOV and aspect
ratio. Sometimes traversal jitter occurs when the player steps over world objects, so vertical damping can be added

// A typical damping scheme for avoiding unwanted noise in the vertical
// motion of the camera is to simply limit the amount allowed per update.
float const kMaxZMotionPerSecond(0.25f); // desired maximum motion
float zMotion = newPosition.GetZ() - oldPosition.GetZ();
float const maxZMotion = kMaxZMotionPerSecond * deltaTime;
zMotion = Math::Limit(zMotion, maxZMotion);
newPosition.SetZ(oldPosition.GetZ() + zMotion);

// The smoothing is relatively harsh: the amount of vertical motion applied
// is constant and thus will possibly have discontinuities.
// An alternative is to use a critically damped spring or PID controller.
float const zMotion = newPosition.GetZ() - oldPosition.GetZ();
verticalSpring.SetLength(AbsF(zMotion)); // update the spring length
// Here the target spring length is zero, and need not
// be set each time. Typically the spring length is unsigned,
// so that must be dealt with.
float const newZMotion = verticalSpring.Update(deltaTime);
if (zMotion > 0.0f)
    newPosition.SetZ(newPosition.GetZ() - newZMotion);
else
    newPosition.SetZ(newPosition.GetZ() + newZMotion);
-
Third person cameras: must solve the problem of the camera accelerating more slowly than the character (motion
lag), and of the camera decelerating more slowly than the character (overshooting)

Vec3 const deltaPosition = desiredPosition - currentPosition;
// the distance at which velocity damping is applied
float const kDampDistance(5.0f);
float const K = Math::Limit(deltaPosition.Magnitude() / kDampDistance, 1.0f);
// Limit constant to 0..1 based on distance
Vec3 const cameraVelocity = deltaPosition.AsNormalized() * K;
Vec3 const newPosition = currentPosition + cameraVelocity * deltaTime;

// We can help solve the problem of the camera being left behind by adding a
// portion of the target object's velocity into the original equation.
Vec3 targetVelocity = (desiredPosition - previousDesiredPosition) / deltaTime;
Vec3 cameraVelocity = (deltaPosition.AsNormalized() * K * deltaTime)
                    + (targetVelocity * T);

// To have smooth motion we need to accelerate the camera over time
// with a limiter to provide some degree of lag in the motion.
float const acceleration = Math::Limit((desiredVelocity - currentVelocity), kMaxAcceleration);
currentVelocity += acceleration * deltaTime;
Vec3 const desiredPosition = currentPosition + (currentVelocity * deltaTime);
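A common alternative to the proportional controller above (my addition, not from the book) is frame-rate-independent exponential damping: the amount of remaining error decays by the same factor per unit time regardless of how the time is sliced into frames.

```cpp
#include <cassert>
#include <cmath>

// Move 'current' toward 'desired' with exponential smoothing whose result
// does not depend on the frame time slicing: a larger 'lambda' means
// snappier tracking, a smaller one means heavier lag.
float Damp(float current, float desired, float lambda, float deltaTime)
{
    return desired + (current - desired) * std::exp(-lambda * deltaTime);
}
```

Because the decay is exponential, two updates of half a second each land the camera exactly where a single one-second update would.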
Racing games:
extra views can be provided to observe other racers' positions; increasing the FOV can create the sensation of objects rushing past the player

Ground vehicles

RTS

Flight simulation: an important consideration is whether the camera is allowed to roll together with the aircraft

Motion damping: adding vertical damping can reduce the feeling of bumpiness

Motion filters: filter out noise with a low-pass filter

Vec3 const movementDelta = newPosition - currentPosition;
if (movementDelta.Magnitude() > kMovementThreshold)
{
    currentPosition = newPosition;
}
else
{
    // ignoring small changes may cause the camera never to move at all!
}
An improved approach uses finite impulse response and infinite impulse response filters:

One principle:

> The player should not be REQUIRED to manipulate the
camera simply to play the game, unless explicitly dictated by the game
design.

If camera-relative, the sheer act of moving the camera changes the
control reference frame and "pulls the rug out from under the player's
feet". Sadly, some games seem to treat this behavior as acceptable. It
is not. Changing the control reference frame without player knowledge is
counter to good game play practice.
+
The related techniques are introduced below:

Manipulation of the camera position:
sometimes the player is allowed to manipulate the camera directly; this splits into 2D and 3D cameras

Single-screen techniques: common in fighting games and co-op games

2D cameras: 2D games usually restrict camera movement to avoid showing areas outside the game scene

3D cameras: switching between different cameras can use interpolation or a jump cut

The camera position is hard to determine

The camera is usually fixed at a height relative to the game world's surface, independent of player movement

The camera position may cause rapid changes in orientation

Observer cameras: provide an observer camera that can be moved around freely

Camera motion validation: some games do not allow manipulating the camera position

Positional control constraints:
character-relative cameras usually stay within a certain region behind the player, and generally are not made to face the character. Camera movement should be as smooth and slow as possible to avoid player
disorientation
Game hints are script objects that provide one possible type of
+runtime mechanism by which designers can override player properties,
+game controls or camera properties according to events
-
Camera position
+
Camera hints: change the camera's properties or movement; camera properties that can be overridden include:
-
Offset within the local space of an object
-
Angular offsets relative to game objects
-
Valid position determination
-
Spline curves
+
Camera behaviors

Position and orientation

Relative distance

Look-at point

Movement speed

FOV
-
Camera orientation
+
Player hints
-
Representation
-
Look-at
-
Shortest rotation arc
-
Roll removal
-
Twist reduction
-
Determining angles between desired orientations
+
Prevent motion of the player
+
Relocate player position
+
Change player movement characteristics
+
Flags to indicate special kinds of game play
-
Camera motion
+
Control hints
-
Proximity
-
Damping functions
-
Springs
-
Interpolation
+
Disable specific controls or sets of player controls
+
Specify the control reference time and the time to interpolate to
+that new reference time
In third-person cameras, when the object is above or below the camera, rapid camera rotation may result. One approach is to limit the speed at which the camera rotates around the forward
vector, but only when the forward vector is almost parallel to the world up
axis
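A sketch of that idea with hypothetical names: shrink the permitted yaw rate as the camera's forward vector approaches the world up axis, so the camera cannot spin rapidly when the target passes almost directly overhead or underfoot.

```cpp
#include <cassert>
#include <cmath>

// Clamp the desired yaw rate to a limit that shrinks to zero as the
// camera's forward vector becomes parallel to the world up axis.
// forwardDotUp is the dot product of the (unit) forward vector and world up.
float LimitYawRate(float desiredYawRate, float forwardDotUp, float maxYawRate)
{
    float alignment = std::fabs(forwardDotUp); // 1 when looking straight up or down
    float limit = maxYawRate * (1.0f - alignment); // shrink the limit near the poles
    if (desiredYawRate > limit) return limit;
    if (desiredYawRate < -limit) return -limit;
    return desiredYawRate;
}
```

A smoother falloff curve (e.g. squaring the alignment) may feel better in practice; the linear scale is just the simplest choice.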
-
World space to screen
-space conversion
-
Depends on the projection algorithm
+
Some rules of thumb:
-
Orthographic projection

uint32 const ScreenX = ObjectX - ViewportX; // assumes top left corner is (0,0)
uint32 const ScreenY = ViewPortY - ObjectY; // depends on the direction of +Y
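For a perspective projection the conversion additionally divides by view-space depth. A minimal pinhole-style sketch (my addition; assumes the point is already in camera space, +Z forward and in front of the camera, with the principal point at the viewport centre):

```cpp
#include <cassert>
#include <cmath>

struct Screen { float x, y; };

// Project a camera-space point to pixel coordinates: focalLength is
// expressed in pixels, and +Y points down in screen space.
Screen Project(float x, float y, float z, float focalLength,
               float viewportW, float viewportH)
{
    Screen s;
    s.x = viewportW * 0.5f + focalLength * (x / z);
    s.y = viewportH * 0.5f - focalLength * (y / z); // flip Y for screen space
    return s;
}
```

A point on the optical axis lands exactly at the viewport centre, and points further away (larger z) move proportionally closer to it.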
Vec3 deltaPosition = desiredPosition - currentPosition;
Vec3 movementDirection = deltaPosition.AsNormalized();
float extension = deltaPosition.Magnitude();
float force = -kSpringConstant * extension; // for a unit mass, this is acceleration, F = ma
Vec3 newVelocity = currentVelocity + (movementDirection * force);
Vec3 newPosition = currentPosition + (currentVelocity * deltaTime);
Each entry in the history buffer has a corresponding filter coefficient that determines how much influence that entry has, i.e. it shapes the responsiveness to the input values.

static float const firCoefficients[] =
{
    // these values greatly influence the filter
    // response and may be adjusted accordingly
    0.f, 0.f, 0.f, 1.f, 2.f, 3.f, 4.f
};

void CFIRFilter::Initialize(void)
{
    // setup the coefficients for desired response
    for (int i = 0; i < mHistoryBuffer.size(); ++i)
        mCoefficients[i] = firCoefficients[i];
}

float CFIRFilter::Update(float const input)
{
    // copy the entries in the history buffer up by one
    // position (i.e. lower entries are more recent)
    for (int i = mHistoryBuffer.size() - 2; i >= 0; --i)
    {
        // not efficient! Can use circular buffer
        mHistoryBuffer[i + 1] = mHistoryBuffer[i];
    }
    mHistoryBuffer[0] = input;

    float fir(0);
    // now accumulate the values from the history
    // buffer multiplied by the coefficients
    for (int i = 0; i < mHistoryBuffer.size(); ++i)
    {
        fir += mCoefficients[i] * mHistoryBuffer[i];
    }
    return fir;
}
-
Infinite impulse response

Unlike an FIR, an IIR also feeds the previous output values back in

void CIIRFilter::Initialize(void)
{
    mInputCoefficients[0] = 0.5f;
    mInputCoefficients[1] = 0.3f;
    mOutputCoefficients[0] = 0.5f;
    mOutputCoefficients[1] = 0.3f;
}

float CIIRFilter::Update(float const input)
{
    for (int i = mInputHistoryBuffer.size() - 2; i >= 0; --i)
    {
        // not efficient! Can use circular buffer
        mInputHistoryBuffer[i + 1] = mInputHistoryBuffer[i];
    }
    mInputHistoryBuffer[0] = input;

    float const result =
        mInputCoefficients[0] * mInputHistoryBuffer[0] +
        mInputCoefficients[1] * mInputHistoryBuffer[1] +
        mOutputCoefficients[0] * mOutputHistoryBuffer[0] +
        mOutputCoefficients[1] * mOutputHistoryBuffer[1];

    for (int i = mOutputHistoryBuffer.size() - 2; i >= 0; --i)
    {
        // not efficient! Can use circular buffer
        mOutputHistoryBuffer[i + 1] = mOutputHistoryBuffer[i];
    }
    mOutputHistoryBuffer[0] = result;
    return result;
}
// find velocities at b and c
Vec3 const cb = c - b;
if (!cb.IsNormalizable())
    return b;
Vec3 ab = a - b;
if (!ab.IsNormalizable())
    ab = Vec3(0, 1, 0);
Vec3 bVelocity = cb.AsNormalized() - ab.AsNormalized();
if (bVelocity.IsNormalizable())
    bVelocity.Normalize();
else
    bVelocity = Vec3(0, 1, 0);

Vec3 dc = d - c;
if (!dc.IsNormalizable())
    dc = Vec3(0, 1, 0);
Vec3 bc = -cb;
Vec3 cVelocity = dc.AsNormalized() - bc.AsNormalized();
if (cVelocity.IsNormalizable())
    cVelocity.Normalize();
else
    cVelocity = Vec3(0, 1, 0);
float const cbDistance = cb.Magnitude();
return CatmullRom(b, c, bVelocity * cbDistance, cVelocity * cbDistance, time);
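The `CatmullRom` helper invoked above is not defined in these notes; in cubic Hermite form (shown in 1D for brevity — the Vec3 version is componentwise) it can be sketched as:

```cpp
#include <cassert>
#include <cmath>

// Cubic Hermite interpolation between positions b and c with tangents
// vb and vc; at t=0 it returns b, at t=1 it returns c.
float CatmullRom(float b, float c, float vb, float vc, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    return (2*t3 - 3*t2 + 1) * b   // basis h00 for the start position
         + (t3 - 2*t2 + t)   * vb  // basis h10 for the start tangent
         + (-2*t3 + 3*t2)    * c   // basis h01 for the end position
         + (t3 - t2)         * vc; // basis h11 for the end tangent
}
```

With the velocity construction shown above (normalized tangents scaled by the segment length) this reproduces the usual Catmull-Rom behavior at the control points.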
+
Determine the normal at each end of the segment and by using similar triangles
Determine the length on the segment
If length within 0..1 then
    Convert linear 0..1 into a parametric value
    With parametric value 0..1 find the position within the arc using the usual spline evaluation
    Check position against source position for LOS and other factors
    If OK, determine the physical distance between two points and compare to current "best"
Else
    Not within segment, so proceed to next segment
A semi-random camera reorientation while the player character is
idle

Automated orientation
control

Automatically reorienting the camera in the absence of player input:
-
Camera manager:
tracks all cameras, forwards input, and ensures each one is updated correctly

Object manager: manages every object in the game and guarantees the correct logic-processing order

Message manager

Audio manager: controls changes to sound effects through it

Input manager: passes input information to all game objects that need it
+
Automated control over camera pitch when the player is jumping:
the camera may begin to look downward from the moment the player jumps

Automated pitch control when traversing environmental features: This
is applied to present a view facing up or down a ramp, staircase or
other incline as appropriate so that players have a better view of what
they are moving toward;
where there are caves or cliffs, the camera should automatically look toward them to give a hint

Automated pitch control during combat or interactions:
automatically adjust pitch to point out interactive objects

Automated reorientation of the player or a camera toward a target
position: lock onto the target object

Repositioning and reorientation of the camera to face the same
direction as the player character

Transitions from first to third person cameras:
make sure the character is only rendered once the camera has moved sufficiently far away, to avoid breaking the illusion; fading the character can solve this problem

Transitions from third to first person cameras: can be implemented with a cut
transition
Frustum culling of the player character:
third-person games must first consider framing the player, and usually the camera looks directly in the player's direction of travel, but the player must also be kept within the view
frustum
-
Camera update loop

Cameras generally follow the update logic below:
+
Chapter 8: Navigation and
Occlusion

Dynamic determination of how the camera should reach its desired
position is referred to here as navigation

The Camera as an AI Game
Object

The environment must be assumed to be closed, i.e. to have collision surfaces

Navigation Techniques

Camera path-finding must take environmental constraints into account

Dynamic navigation
techniques

The main methods include:
-
Updating active cameras

Update camera scripting

Input processing

Pre-think logic

Think ordering

Cinematic camera deferral: generally, cinematic cameras start at the very beginning of an update
loop and end at the very end of it

Post-think logic

Update audio logic

Debug camera

Render setup
+
Ray casting:
use rays to decide whether occluding objects exist, though this may have performance costs; additionally, more rays can be added to cover a larger area. Another approach is ray-casting
hysteresis: accumulate ray-cast information across multiple updates and build a probability map of likely collisions, reducing the number of ray
casts per update. Ray
casts can also be used to help drive camera movement. Objects should be categorized to distinguish which ones require ray casting

For all ray casts
    if ray cast successful
        No influence applied
    if ray cast fails
        Scale influence factor by distance of ray cast collision from target
        Add influence to desired camera position
End

stl::vector<Vec3> influence; // might be useful as a class
for (int i = 0; i < colliders.size(); ++i)
{
    influence.push_back(colliders[i].offset * colliders[i].GetWeighting());
    // the weighting depends on line of sight and/or
    // other factors such as the relative collider position
}
// will need to get an average or similar
return GetCentroid(influence);
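The centroid helper called at the end of the influence-accumulation snippet above is not defined in these notes; a plain averaging sketch (the struct and names are hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average a set of pre-weighted influence offsets into a single
// displacement to apply to the desired camera position.
Vec3 GetCentroid(const std::vector<Vec3>& influence)
{
    Vec3 sum{0.0f, 0.0f, 0.0f};
    if (influence.empty()) return sum;
    for (const Vec3& v : influence)
    {
        sum.x += v.x;
        sum.y += v.y;
        sum.z += v.z;
    }
    float inv = 1.0f / static_cast<float>(influence.size());
    return {sum.x * inv, sum.y * inv, sum.z * inv};
}
```

Since the offsets are already scaled by their weights when pushed into the vector, a simple mean suffices here.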
+
Pre-defined scripting solutions for all ledge situations
+
Use pre-defined solutions for difficult to resolve cases, especially
+in confined spaces: 可以用stationary cameras或者spline paths
+
Teleport the camera to a new position if LOS fails due to the ground
+being between the camera and its target: 这可能需要检测剪玩家的高度
+
Use player movement hysteresis to provide an approximate path taken
+by the player character or target object
+
Interrogate the surrounding geometry to dynamically choose a path
+based on the last position at which the player was visible to the
+camera: 该位置给出了关于ledge的位置信息用于camera
+path,但要保证相机不能离物体太近,也不能让相机垂直朝向
-
The process of deriving a pleasing sensory experience from movement or actions is called the kinaesthetic experience.
+
+
+
Movement methods

Some methods of moving the camera include:


Instantaneous motion: move the camera directly to the desired position, but note:


The orientation may also need to change

It may be necessary to check whether the new position is too close to environmental objects, or inside an object

Preserve the control reference frame as far as possible

Where possible, define all transitions of this type in a single function

If a Doppler effect occurs, the camera's maximum movement speed should be limited

Another secret of fun lies in the risk and reward embedded in a challenge.

Locked

Proportional controller: this method performs poorly when the target is also moving
+
-
God of War III's design of character actions creates an authentic game experience in which the player and the player character merge into one.
+
Most commonly used properties are implemented through an easy-to-use interface, such as camera path definition,
camera hint placement, property editing, pre-defined macros of script
objects:
-
In a typical action game, the player needs to choose fixed attack moves or combos according to the surrounding enemies, then input commands to execute them.
+
Placement/orientation of cameras
+
Camera property editing
+
View from the camera while manipulating
+
Editing in-game then transferring data back
+
Limit properties shown to those appropriate for behavior
+
Paths -- automatic waypoint dropping and connections
+
Volumes: this is the ability to define 3D regions for various
+camera-related functionality
+
Surfaces: defining surfaces for cameras to move on, for example
+
Links to target objects: identifying target objects for particular
+cameras
+
Control of position/orientation/roll/fov over time (spline
+editor)
+
Evaluation of target object or interpolant derivation over time:
+shows where an object will be located over time in case it has an impact
+on the camera behavior
+
+
Camera collision mesh

It is more convenient for the camera to have its own collision geometry, since that allows dynamic changes

Camera Debugging Techniques

Interactive debugging

Interactive debugging includes:

First consider the distance to the enemy, the speed of the move, the move's tracking ability, and the move's attributes.

Freeflow combat: the player does not input attack moves, but attack actions suited to the surrounding situation.

One button maps to multiple attack moves and enemy reactions.

Batman is still a game that automatically selects actions according to the environment.

Internal property interrogation: inspect camera data directly
+
Separate debugging camera: maintain a separate debug
camera, used only in the development environment. When using the debug
camera, it is useful to support certain actions, including: placing the character at the current camera position; displaying the current camera position; casting rays from the current camera position to determine environment properties; pausing the game while still allowing the debug
camera to be manipulated; capturing the current render buffer and exporting it

Control of the update rate of the game:
change the game's update rate, in particular single-stepping

General camera state: track a number of camera items, including: state information
(active, interpolating, under player control, etc.), script messaging,
changes to active camera hints/game cameras, occlusion state and the
amount of time occluded, fail-safe activation, invalid camera properties
including the validity of the transformation matrix

Visual representation of camera properties:
visualize some camera properties, for example showing camera movement with a wireframe sphere or
cube, the camera orientation, the desired look-at point and the desired orientation

Property hysteresis: sometimes the history of camera properties needs to be viewed, such as camera
position (known as breadcrumbs) and camera orientation (display orientation
changes as points on the surface of a unit sphere)

Movement constraints: movement path drawing, based on spline curve
evaluations and represented by an approximation of line segments;
movement surface drawing

Line of sight:
draw a line along the forward direction and mark its state with color, for example red for blocked; the material of the blocking object can also be shown, e.g. stone,
ceiling, etc.
// rotation using vector decomposition - the vector form
void DecomposeVector(const Eigen::Vector3f &n, const Eigen::Vector3f &p, float angle)
{
    double rotationAngle = angle / 180.0 * PI;

    auto startTime = std::chrono::high_resolution_clock::now();

    Eigen::Vector3f rotatedVector = cos(rotationAngle) * p
        + (1 - cos(rotationAngle)) * (n.dot(p)) * n
        + sin(rotationAngle) * (n.cross(p));

    auto endTime = std::chrono::high_resolution_clock::now();
    double deltaTime = std::chrono::duration<double, std::milli>(endTime - startTime).count();

    cout << "Method: vector decomposition - the vector form. The rotated vector p' is ("
         << rotatedVector(0) << "," << rotatedVector(1) << "," << rotatedVector(2)
         << "). The time used is " << deltaTime << endl;
}
// rotation using vector decomposation - the matrix form voidDecomposeMatrix(const Eigen::Vector3f &n, const Eigen::Vector3f &p, float angle){ double rotationAngle = angle / 180.0 * PI;
auto startTime = std::chrono::high_resolution_clock::now();
Eigen::Matrix3f N = Eigen::Matrix3f::Identity(); N << 0, -n(2), n(1), n(2), 0, -n(0), -n(1), n(0), 0; Eigen::Matrix3f R = Eigen::Matrix3f::Identity() + sin(rotationAngle) * N + (1 - cos(rotationAngle)) * N * N; Eigen::Vector3f rotatedVector = R * p;
auto endTime = std::chrono::high_resolution_clock::now(); double deltaTime = std::chrono::duration<double, std::milli>(endTime-startTime).count();
cout << "Method: vector decomposition - the matrix form. The rotated vector p' is (" << rotatedVector(0) << "," << rotatedVector(1) << "," << rotatedVector(2) << "). The time used is " << deltaTime << endl; }
auto startTime = std::chrono::high_resolution_clock::now();
Eigen::Vector3f crossed = n.cross(p); Eigen::Vector3f u = n; Eigen::Vector3f v = crossed / crossed.norm(); Eigen::Vector3f w = n.cross(v); Eigen::Matrix3f Q = Eigen::Matrix3f::Identity(), T = Eigen::Matrix3f::Identity(); Q.row(0) = u; Q.row(1) = v; Q.row(2) = w; T << 1, 0, 0, 0, cos(rotationAngle), -sin(rotationAngle), 0, sin(rotationAngle), cos(rotationAngle); Eigen::Vector3f rotatedVector = Q.transpose() * T * Q * p;
auto endTime = std::chrono::high_resolution_clock::now(); double deltaTime = std::chrono::duration<double, std::milli>(endTime-startTime).count();
cout << "Method: axis coordination. The rotated vector p' is (" << rotatedVector(0) << "," << rotatedVector(1) << "," << rotatedVector(2) << "). The time used is " << deltaTime << endl; }
The vector p is (1,2,3). The rotation axis n is (0.213724,0.919011,-0.331271).
Method: vector decomposition - the vector form. The rotated vector p' is (3.57449,0.643966,0.899062). The time used is 0.016354
Method: vector decomposition - the matrix form. The rotated vector p' is (3.57449,0.643966,0.899062). The time used is 0.016642
Method: axis coordination. The rotated vector p' is (3.57449,0.643966,0.899062). The time used is 0.021062
A few more test groups:
The vector p is (1,-654.1,12.88). The rotation axis n is (0.995044,0.0136933,-0.0984901).
Method: vector decomposition - the vector form. The rotated vector p' is (-59.7309,-338.298,-556.777). The time used is 0.015503
Method: vector decomposition - the matrix form. The rotated vector p' is (-59.7309,-338.298,-556.777). The time used is 0.015297
Method: axis coordination. The rotated vector p' is (-59.7309,-338.298,-556.777). The time used is 0.021547

The vector p is (32.5,45.1,-2.2). The rotation axis n is (0.57735,0.57735,0.57735).
Method: vector decomposition - the vector form. The rotated vector p' is (5.16667,52.4667,17.7667). The time used is 0.015558
Method: vector decomposition - the matrix form. The rotated vector p' is (5.16667,52.4667,17.7667). The time used is 0.015215
Method: axis coordination. The rotated vector p' is (5.16667,52.4667,17.7667). The time used is 0.020841

The vector p is (666,0,0). The rotation axis n is (0,0,1).
Method: vector decomposition - the vector form. The rotated vector p' is (1.78454e-05,666,0). The time used is 0.01517
Method: vector decomposition - the matrix form. The rotated vector p' is (0,666,0). The time used is 0.015402
Method: axis coordination. The rotated vector p' is (1.78454e-05,666,0). The time used is 0.020898
The last group shows a floating-point precision problem, so the code should also include a check like:

if (abs(value - round(value)) < epsilon) value = round(value);