
From its beginnings, the phenomenology of religion has faced a series of critical objections.

(a)

Eliade's phenomenological approach in particular is the target of Wagner's (1986) criticism: Wagner objects that Eliade's concept of the homo religiosus is guided by a notion of ‘natural religion,’ and argues that it presupposes unwarranted knowledge about the religious situation in what Eliade considers to be ‘archaic cultures.’

(b)

With respect to religion, phenomenology of religion takes a decidedly substantialist position (Luckmann 1983). Moreover, critics charge that this substantialism rests on nonempirical, extraphenomenological, and theological assumptions, and that such intentions enter into the analyses of classic representatives of the phenomenology of religion such as Kristensen, Eliade, or Van der Leeuw.

(c)

To many critics, this is due to the fact that methods are rarely made explicit. Despite the appeal to phenomenological methods, the basic presuppositions often seem arbitrary, and theoretical reflection is also criticized for falling short of the diligent collection and classification of data.

(d)

Since phenomenology provides the basic methodology for the phenomenology of religion, it is particularly consequential that the phenomenology of religion lost contact with the developments in philosophical phenomenology from about the 1950s onwards. As a result, the phenomenology of religion is rarely considered important for the analysis of religion within the tradition of phenomenological philosophy (Guerrière 1990).

(e)

The phenomenology of religion has been criticized for ignoring the social and cultural contexts of religious phenomena. Moreover, just as phenomenology in general has been criticized for its naive attitude towards language and cultural perspectivism, the phenomenology of religion is likewise criticized for the linguistic and cultural bias implicit in its analysis of ‘phenomena’ and ‘symbols.’ Thus, the results of free variation depend on the phenomenologist's cultural background (Allen 1987).


URL: https://www.sciencedirect.com/science/article/pii/B0080430767040596

Personality Theories

W. Mischel, R. Mendoza-Denton, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Phenomenological Approaches

In the middle of the twentieth century, phenomenological approaches arose, in part, as a humanistic protest against the earlier psychodynamic and behavioristic views. Phenomenologically oriented theorists argued that personality is not merely passively molded by internal motivational or external situational forces that ‘shape’ what the person becomes. Instead, people are active agents in the world and have a measure of control over their environment and their own lives. In this view, people are considered capable of knowing themselves and of being their own best experts. Self-knowledge and self-awareness become the route to discovering one's personality and genuine self.

Phenomenological approaches to personality (sometimes called self theories, construct theories, or humanistic theories) tend to reject many of the motivational concepts of psychodynamic theories and most of the environmental determinism of behavioral theories. Instead, their focus is on the development of an active ‘self’: People develop self-concepts and goals that guide their choices and their life course. Understanding personality, as well as the person's goals and choices, requires attention to how the individual characteristically perceives, thinks, interprets, and experiences or even ‘constructs’ the personal world.

George Kelly's (1905–67) theory of personal constructs, for example, emphasized people's subjective perceptions as the determinants of their behavior. Kelly believed that, just like scientists, people generate constructs and hypotheses both about themselves and about how the world works; they use these constructs to anticipate, understand, and control events in their lives. Therefore to understand people, one has to understand their constructs, or personal theories. Problems develop when the constructs people generate don't work well for them, when they are ‘bad scientists’ and fail to ‘test’ their constructs or hypotheses against the realities of the environment, or when they see themselves as helpless victims of their own personalities or life situations. Kelly's principle of ‘constructive alternativism’ held that all events in the world, including one's own behavior and characteristics, can be construed in multiple, alternative ways. While it is not always possible to change these events, one can always construe them differently, thus influencing how one is affected by them and how one reacts to them.

Carl Rogers (1902–87), another pioneer of the phenomenological approach, proposed two systems: the organism and the self (or self-concept). The organism is the locus of all experience, which includes everything potentially available for awareness. The self is that portion of the perceptual field that is composed of perceptions of characteristics of the ‘I’ or the ‘me.’ It develops from experiences and interactions with the environment, and also shows a tendency towards actualization. Rogers maintained that the central force in the human organism is the tendency to actualize itself—to move constructively in the direction of fulfillment and enhancement. The self may be in opposition or in harmony with the organism. When the self is in opposition or incongruence with the experiences of the organism (e.g., when the self tries to be what others want it to be instead of what it really is), the person may become anxious, defensive, and rigid. However, when the self is open and accepting of all of the organism's experiences without threat or anxiety, the person is genuinely psychologically adjusted, for the self and the organism are one.

In contemporary work, the ‘self’ is seen as multifaceted and dynamic, consisting of multiple self-concepts that encode different aspects of the person (e.g., self as lover, self as father, the ‘ideal’ self, the ‘actual’ self) and become differentially salient depending on context (Markus and Nurius 1986). According to Higgins (1987) for example, a perceived discrepancy between the mental representation of the person one would ideally like to be (the ideal self) and the representation of who one actually is (the actual self) makes one more vulnerable to feelings of dejection, such as disappointment or dissatisfaction. In contrast, a discrepancy between one's representation of who one ought to be (the ought self) and the actual self can lead to feelings of agitation such as fear and worry. Motivation for behavior change arises from the conflicts each individual feels among his or her various representations of the self. For instance, upon receiving a low grade on an exam, an undergraduate may subsequently study very hard to relieve the guilt of not living up to what she herself perceives to be her responsibility as an exemplary student. Alternatively, she may re-evaluate her negative interpretation of past events, thinking about all of the good grades she has got in other classes and the myriad of other activities she is involved in (see Personality and Conceptions of the Self).


URL: https://www.sciencedirect.com/science/article/pii/B0080430767016478

Variational Principles for the Linearly Damped Flow of Barotropic and Madelung-Type Fluids

Heinz-Jürgen Wagner, in Variational and Extremum Principles in Macroscopic Systems, 2005

2.1 Phenomenological and microscopic approaches

The strategies for the inclusion of dissipative effects within quantum theory can roughly be divided into two subclasses. The ‘phenomenological approach’ tries to approximate the interaction of a particle with its environment by condensing these contributions into phenomenological (mostly one-parameter) extensions of the ordinary one-particle Schrödinger equation. Such equations have often been employed for the description of various frictional effects in quantum systems including e.g. inelastic nucleon–nucleon scattering, diffusion of interstitial impurity atoms, radiation damping, and dissipative tunneling. Surveys of the whole field can be found e.g. in Refs. [10–12].
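One widely discussed example of such a one-parameter extension is the Kostin (Schrödinger–Langevin) equation; it is given here only as an illustration and is not necessarily the particular form treated in this chapter. The single friction constant γ is the phenomenological parameter:

$$ i\hbar\,\frac{\partial \psi}{\partial t} \;=\; \left[ -\frac{\hbar^{2}}{2m}\,\nabla^{2} \;+\; V(\mathbf{r}) \;+\; \frac{\gamma\hbar}{2i}\left( \ln\frac{\psi}{\psi^{*}} \;-\; \left\langle \ln\frac{\psi}{\psi^{*}} \right\rangle \right) \right] \psi . $$

Writing ψ in Madelung form shows that the extra term contributes a friction force proportional to γ times the velocity field, which is why equations of this type are natural candidates for describing linearly damped Madelung-type flow.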

For a more satisfactory theory of quantum dissipation one has to consider the whole system of particle and environment and subsequently try to eliminate the environmental degrees of freedom by appropriate averaging processes. We, however, shall not deal with this so-called ‘microscopic approach’ here. For work in this direction the reader may consult e.g. Refs. [13–15].


URL: https://www.sciencedirect.com/science/article/pii/B978008044488850014X

Schütz, Alfred (1899–1959)

I. Srubar, in International Encyclopedia of the Social & Behavioral Sciences, 2001

Alfred Schütz, born in Vienna, emigrated in 1938 via Paris to New York. He was the founder of the phenomenological approach in sociology, which is one of the main paradigms in the interpretative social sciences. Influenced by Max Weber, Henri Bergson, Edmund Husserl, the Austrian School of Economics, and pragmatism, he formulated a theory of the life world and its structures showing how actors produce and understand social reality in everyday interaction and communication. His concepts initiated inquiry into the everyday life of societies (sociology of everyday life) and were crucial for the origin and further development of numerous sociological disciplines (ethnomethodology, cognitive sociology, sociology of knowledge, sociology of language). The findings stemming from his theory of the life world entered into the mainstream of sociological theory and his methodological suggestions sparked innovations in the field of qualitative methods of social research. Beyond sociology, his ideas were received predominantly in the fields of philosophy, education, social geography, and politics.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767003338

Advances in Geophysics

Wojciech Dębski, in Advances in Geophysics, 2010

1 Introduction

There are many ways to approach inverse problems, ranging from a mostly qualitative description (Hjelt, 1992), the phenomenological approach of Parker (1994), the easy-to-read book by Menke (1989), the teaching-orientated approach (Gubbins, 2004; Aster et al., 2005), a mathematically rigorous approach (see, e.g., Kirsch, 1996; Zhdanov, 2002), to the probabilistic approach of Tarantola (1987, 2005). For a long time inverse problems were understood as the task of estimating parameters used to describe studied structures or processes. They were traditionally solved by using optimization techniques following the least absolute value or least squares criteria (Lines and Treitel, 1983; Tarantola, 1987; Menke, 1989; Aster et al., 2005). Reformulating the inverse theory in probabilistic language and setting it up within a more general inference context has opened up a new area of application which, for the purpose of this review, is referred to as nonparametric inversion. Nonparametric inverse problems such as optimum parameterization (Malinverno, 2000), forward modeling and parameterization selection (Dębski and Tarantola, 1995; Bodin and Sambridge, 2009; Gallagher et al., 2009), resolution and parameter correlation analysis (Dębski, 1997a; Dębski et al., 1997; Wiejacz and Dębski, 2001), and nonuniqueness analysis (Deal and Nolet, 1996b; Vasco et al., 1996), to name a few, were difficult even to formulate within the classic approach. With the advent of the probabilistic approach, they are attracting more and more attention, as these problems are often quite important.
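Schematically, and in notation assumed here rather than quoted from this chapter, the traditional optimization-based formulation seeks the model $m$ minimizing a misfit of the form

$$ m_{\mathrm{est}} \;=\; \arg\min_{m} \; \sum_{i} \left| d_{i}^{\mathrm{obs}} - g_{i}(m) \right|^{p}, \qquad p = 1 \ (\text{least absolute values}) \quad\text{or}\quad p = 2 \ (\text{least squares}), $$

where $g(m)$ denotes the forward modeling operator predicting the data for a given model. The nonparametric questions listed above (choice of parameterization, resolution, nonuniqueness) concern the structure of this mapping itself and therefore do not reduce naturally to the minimization of a single misfit number.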

The classically understood inverse theory faces a new challenge in its development. In many applications, obtaining an optimum “best fitting” model according to a selected optimization criterion is no longer sufficient. We need to know how plausible the obtained model is or, in other words, how large the uncertainties are in the final solutions (Scales, 1996; Sen and Stoffa, 1996; Banks and Bihari, 2001; Scales and Tenorio, 2001; Malinverno, 2002; Dębski, 2004). Actually, the necessity of estimating the inversion uncertainties within the parameter estimation class of inverse problems is one of the most important requirements imposed on any modern inverse theory. It can only partially be fulfilled within the classical approaches. For example, assuming Gaussian-type inversion errors, inversion uncertainty analysis can, in principle, be performed for linear inverse problems (see, e.g., Duijndam, 1988b; Parker, 1994; Zhdanov, 2002), although in the case of large inverse tasks like seismic tomography it can be quite difficult (Nolet et al., 1999; Yao et al., 1999). On the other hand, in the case of nonlinear tasks a comprehensive evaluation of the inversion errors is usually impossible. In such a case only a linearization of the inverse problem around the optimum model allows the inversion errors to be estimated, provided that the original nonlinearity does not lead to multiple solutions, null space, etc. (see, e.g., Menke, 1989; Parker, 1994; Dębski, 2004). The probabilistic technique, which offers a very general, flexible, and unified approach, outperforms the classical inversion techniques in such applications. Taking into account the above-mentioned fact that the probabilistic inverse theory also provides an efficient tool for solving nonparametric inverse tasks, we can conclude that the probabilistic approach is currently the most powerful inversion technique.
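For illustration, in the standard linear Gaussian setting (the notation below is assumed for this sketch, not taken from the chapter: forward operator $G$, data covariance $C_D$, a priori model covariance $C_M$, prior model $m_{\mathrm{prior}}$), the a posteriori model covariance and mean have the closed form

$$ \tilde{C}_M \;=\; \left( G^{T} C_D^{-1} G + C_M^{-1} \right)^{-1}, \qquad \tilde{m} \;=\; m_{\mathrm{prior}} + \tilde{C}_M\, G^{T} C_D^{-1} \left( d_{\mathrm{obs}} - G\, m_{\mathrm{prior}} \right), $$

so the inversion uncertainties are completely described by $\tilde{C}_M$. No comparably simple closed form exists for general nonlinear problems, which is where the sampling-based probabilistic approach becomes essential.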

Solving nonparametric inverse problems or estimating the inversion uncertainties requires not only finding the optimum model but also inspecting its neighborhood to find out how “large” the region of plausible models is. This observation opens a natural link between the inverse problem and the measure and probability theories which provide the mathematical framework for such a quantitative evaluation. As a result of a combination of both techniques, the probabilistic inverse theory emerged in the early 1970s (Kimeldorf and Wahba, 1970; Tarantola, 1987; Sambridge and Mosegaard, 2002).

The probabilistic inverse theory incorporates an informative point of view according to which the solution to the inverse task consists in combining all available information on the studied object or system, no matter how that information is obtained. Thus, information coming from an experiment (data), theoretical prediction (the relation between model parameters and measured data), or any additional a priori knowledge is treated on the same footing. Of course, in this kind of reasoning the solution of the inverse problem is also a kind of a posteriori information. One very important aspect of the probabilistic inverse theory is the representation of the handled information by appropriate mathematical objects. Following the long tradition of theoretical statistics (Box and Tiao, 1973; Jaynes, 1986; Carlin and Louis, 1996; Gelman et al., 1997), it has been proposed to describe any piece of information by a probability distribution (Tarantola and Valette, 1982; Jaynes, 1988; Tarantola, 2005). Thus, the solution of the inverse problem according to this approach is the a posteriori probability distribution representing the a posteriori knowledge rather than a single, optimum model as is the case in the traditional approach.
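In Tarantola's formulation, for example, this combination of information can be written compactly (the symbols are the commonly used ones and are assumed here rather than quoted from the chapter) as

$$ \sigma_M(m) \;=\; k\,\rho_M(m)\,L(m), $$

where $\rho_M(m)$ encodes the a priori information on the model parameters, $L(m)$ is the likelihood function measuring how well the model $m$ explains the observed data through the forward relation, and $k$ is a normalization constant. The a posteriori density $\sigma_M(m)$ is then the solution of the inverse problem in the sense described above.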

Today, the probabilistic inverse theory is not a closed theory but is developing continuously. It was recently found, for example, that it is strongly linked to statistical (Bayesian) inference, on the one hand, and to differential and algebraic geometry, on the other (Dębski, 2004). The consequences of this newly discovered link to abstract geometry have not been explored yet. Another example of theoretical development is a very interesting analysis of symmetries and invariants of inverse tasks by methods of group theory proposed by Vasco (2007). Recently, the tasks of optimum model selection (Sambridge et al., 2006; Bodin and Sambridge, 2009) and of incorporating a priori information into forward modeling operations (Dębski, 2004) have also been addressed within the probabilistic approach.

Comprehensive use of the probabilistic approach requires an efficient method for sampling the usually multidimensional a posteriori probability distributions. In most practical cases this can be done only by Monte Carlo (MC) numerical techniques, among which the Markov Chain Monte Carlo (MCMC) technique is the most promising (see, e.g., Robert and Casella, 1999). This approach, based on a simulation of Markovian stochastic processes, is flexible enough to accommodate the complicated requirements of geophysical inverse problems.
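The following Python fragment is a minimal, self-contained sketch of a plain Metropolis sampler, given only to make the idea concrete; it is not taken from any of the cited packages, and the toy Gaussian target and step size are placeholder assumptions standing in for the logarithm of a real a posteriori density σ(m).

```python
import numpy as np

def log_posterior(m):
    # Placeholder toy target: stands in for log sigma(m) of a real inverse problem.
    return -0.5 * np.sum(m**2)

def metropolis(m0, n_samples, step=0.5, seed=0):
    """Plain random-walk Metropolis sampler for the density exp(log_posterior)."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    logp = log_posterior(m)
    chain = np.empty((n_samples, m.size))
    for i in range(n_samples):
        proposal = m + step * rng.standard_normal(m.shape)   # symmetric proposal
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:          # Metropolis accept/reject
            m, logp = proposal, logp_prop
        chain[i] = m
    return chain

samples = metropolis(np.zeros(3), n_samples=10_000)
print(samples.mean(axis=0), samples.std(axis=0))   # posterior moments estimated from the chain
```

In a geophysical setting the placeholder target would be replaced by the a posteriori density built from the prior, the data uncertainties, and the forward model; the chain then yields uncertainty estimates, marginals, and correlations rather than a single best-fitting model.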

The goal of this chapter is twofold. Firstly, I want to present the probabilistic inverse theory at its current development stage, explaining and illustrating some theoretical aspects of the theory which can cause problems in real applications. I hope to make this approach even more widely known among practitioners dealing with geophysical inversion in everyday practice. Secondly, by collecting and briefly commenting on some of the most recent inverse cases solved using the probabilistic technique, I want to show the method's flexibility and illustrate what type of problems can be treated by this very powerful inversion technique.

This chapter is divided into three main units. The first part begins with a short description of various inverse tasks. Next the focus shifts to classic parameter estimation problems and the classic approaches are compared to the probabilistic technique. This part ends with a discussion of the pros and cons of using the probabilistic inversion technique for parameter estimation tasks. The second part deals with the mathematical aspects of the probabilistic approach. Some elements of the MC technique important for inverse problems are also presented here. Next, an exhaustive review of geophysical applications of probabilistic inversion completes the chapter. Finally, following the advice of one of the reviewers I have added a short compendium of the literature which fully explains the topics discussed in this chapter and which can be a good starting point for those not yet familiar with inverse problems.

Concluding the introduction, I need to explain the nomenclature used in this chapter. In the literature, the probabilistic approach to inverse problems is commonly referred to as the Bayesian technique. This follows from the long statistical tradition and the fact that the first attempts at solving inverse problems in the spirit of statistical reasoning were based on a particular interpretation of the Bayes theorem (Tarantola, 1987). However, in my opinion the intense theoretical development of the method and the new advanced numerical techniques used in its context have increasingly underlined the “probabilistic” elements of the theory, shifting it away from the initial “Bayesian” form. The name Bayesian inversion was sometimes applied to a specific optimization-based inversion technique which used the idea of the “a priori” constraint to regularize the final solution (Jackson, 1979; Menke, 1989; Gibowicz and Kijko, 1994). Moreover, the term Bayesian inversion was and still is often used to denote the maximum a posteriori (MAP) approach (DeGroot, 1970; Sorenson, 1980), which, although based on the a posteriori probability distribution as the main element of the technique, is actually a kind of optimization technique because it provides a single model as the solution and does not sample the entire a posteriori distribution as the full probabilistic approach does. For these reasons, and to avoid any misunderstanding, I use the term probabilistic inverse theory throughout this chapter.
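Writing the a posteriori probability density as $\sigma_M(m)$ (notation assumed here, as in the earlier sketch), the distinction can be stated in one line: the MAP approach returns only the single model

$$ m_{\mathrm{MAP}} \;=\; \arg\max_{m}\, \sigma_M(m), $$

whereas the full probabilistic approach characterizes $\sigma_M(m)$ as a whole, for example by sampling it and reporting marginals and uncertainty measures.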


URL: https://www.sciencedirect.com/science/article/pii/S0065268710520016

STATIC AND DYNAMIC PROPERTIES OF LOOPLESS AGGREGATES

S. HAVLIN, in Fractals in Physics, 1986

4 DIFFUSION MODEL FOR LOOPLESS AGGREGATES

Earlier studies of anomalous diffusion on general fractal aggregates and treelike structures were usually based on the Einstein relation, using the conductivity to calculate the anomalous diffusion exponents. In this chapter we introduce a phenomenological approach that allows several diffusion properties of trees to be calculated directly from a diffusion model.

Let us therefore consider a loopless tree characterized by a vertex or origin, an infinite branched skeleton, and finite dead ends that branch from the skeleton. For simplicity, the resulting structure will be assumed initially to be discretized in units of $\Delta l$. We will later pass to the continuum limit. The random walk is chosen to be restricted to nearest neighbors. Thus a random walker at $l$ can move in a single step only to $l \pm \Delta l$ on the tree, with probabilities denoted by $p_\pm(l)$. These will not in general be equal, nor will they sum to one. It will be necessary to define a probability for pausing at any given step, equal to $p_0(l) = 1 - p_+(l) - p_-(l)$. In order to derive expressions for these probabilities we use the quantities that characterize the tree, $B(l)$ and $B_S(l)$. The probabilities $p_0(l)$ and $p_\pm(l)$ are related to these quantities by

$$p_0(l) = 1 - p_+(l) - p_-(l) = 1 - \frac{B_S(l)}{B(l)} \tag{4.1}$$

$$\frac{p_+(l)}{p_-(l)} = \frac{B_S(l+1)}{B_S(l)} \tag{4.2}$$

The first of these relations indicates that the random walker pauses in his progress along the skeleton whenever he finds himself on a dead-end. The second indicates that the relative probabilities of a step in the forward or backward directions along the skeleton depend on the relative number of bonds allowing motion forwards and backwards.

Equations (4.1) and (4.2), together with the known scaling properties $B(l) \sim l^{\,d_l - 1}$ and $B_S(l) \sim l^{\,d_l^S - 1}$ at large $l$, allow us to find the asymptotic expressions

$$p_\pm(l) = \frac{A}{2\,l^{\alpha}}\left(1 \pm \frac{B}{2l}\right), \qquad p_0(l) = 1 - \frac{A}{l^{\alpha}} \tag{4.3}$$

where $A$ is a constant related to the proportionality factors in the asymptotic expressions for $B(l)$ and $B_S(l)$, and the parameters $\alpha$ and $B$ are related to $d_l$ and $d_l^S$ by

$$\alpha = d_l - d_l^S, \qquad B = d_l^S - 1 \tag{4.4}$$

Equations (4.3) and (4.4) imply that as the random walker moves further from the origin, he is increasingly likely to remain stationary. This is reasonable because the random walker is increasingly likely to be caught in a dead end as he moves away from the origin. The terms in the parentheses in Eq. (4.3) stem from the fact that when the skeleton branches, i.e., when $d_l^S > 1$, the random walk is biased in the direction of the more richly branched section.

The assumption that the random walker moves only to nearest neighbors allows one to write a recursion relation for the state probabilities $\{U_n(l)\}$ at step $n$:

$$U_{n+1}(l) = p_+(l-\Delta l)\,U_n(l-\Delta l) + p_-(l+\Delta l)\,U_n(l+\Delta l) + p_0(l)\,U_n(l) \tag{4.5}$$

Numerical solutions of Eq. (4.5) have shown that the resulting diffusion equation

$$\frac{\partial U}{\partial n} = \frac{A}{2}\,\frac{\partial^{2}}{\partial l^{2}}\!\left(\frac{U}{l^{\alpha}}\right) - \frac{AB}{2}\,\frac{\partial}{\partial l}\!\left(\frac{U}{l^{\alpha+1}}\right) \tag{4.6}$$

leads to results in good agreement with the solutions of the difference equation (4.5).
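As an illustration only (not code from this chapter), the following Python fragment iterates the difference equation (4.5) on a finite chain with $\Delta l = 1$, using the asymptotic probabilities (4.3); the parameter values and the reflecting treatment of the origin are assumptions made for the sketch.

```python
import numpy as np

# Sketch: iterate the difference equation (4.5) with the asymptotic probabilities (4.3).
A, B, alpha = 0.5, 0.3, 0.4      # illustrative parameter values only
L, steps = 2000, 5000            # chain length (Delta l = 1) and number of steps

l = np.arange(1, L + 1, dtype=float)
p_plus = A / (2.0 * l**alpha) * (1.0 + B / (2.0 * l))
p_minus = A / (2.0 * l**alpha) * (1.0 - B / (2.0 * l))
p_zero = 1.0 - p_plus - p_minus

U = np.zeros(L)
U[0] = 1.0                       # start the walker at the first site, standing in for U(l,0) = delta(l)

for n in range(steps):
    U_new = p_zero * U
    U_new[1:] += p_plus[:-1] * U[:-1]      # forward steps  l -> l + 1
    U_new[:-1] += p_minus[1:] * U[1:]      # backward steps l -> l - 1
    U_new[0] += p_minus[0] * U[0]          # reflect attempted steps below the origin (sketch choice)
    U = U_new

msd = np.sum(l**2 * U)           # second moment of U after `steps` steps
print(msd)
```

Comparing the growth of the measured second moment with $n^{2/(2+\alpha)}$ gives a quick numerical check of Eq. (4.8) and of the exponent $d_w^l = 2 + \alpha$.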

The solution to Eq. (4.6) that satisfies the initial condition U(l,0) = δ(l) is found to be

$$U_n(l) = \frac{(\lambda/n)^{(1+\alpha+B)/(2+\alpha)}\; l^{\,\alpha+B}}{\Gamma\!\big((1+\alpha+B)/(2+\alpha)\big)}\,\exp\!\left(-\frac{\lambda\, l^{\,2+\alpha}}{n}\right) \tag{4.7}$$

in which $\lambda = 2/[A(2+\alpha)^2]$. The expression for $U_n(l)$ allows us to deduce relations between the various exponents. The exponent $d_w^l$ is readily found by calculating $\sigma^2(l)$ from Eq. (4.7). We find that the time dependence of this parameter is

$$\sigma^2(l) = \langle l^2 \rangle = C\, n^{2/(2+\alpha)} \tag{4.8}$$

where $C$ is a constant. This equation implies that $d_w^l = 2 + \alpha = 2 + d_l - d_l^S$. Furthermore, the behavior of $U$ for large but fixed $l$ and $n \to \infty$ goes like $n^{-(1+\alpha+B)/(2+\alpha)}$, which implies that the fracton dimension is

$$d_s = \frac{2(1+\alpha+B)}{2+\alpha} = \frac{2\,d_l}{d_l - d_l^S + 2} \tag{4.9}$$

The exponents $d_w^l$ and $d_s$ are in agreement with those derived in Ch. 3.


URL: https://www.sciencedirect.com/science/article/pii/B9780444869951500652

Phenomenology

Shadd Maruna, Michelle Butler, in Encyclopedia of Social Measurement, 2005

Why Phenomenology?

There are numerous reasons why introspective descriptions of individuals' conscious, inner worlds might be interesting and important to social scientists. Most pragmatically, understanding how individuals perceive the social world may help social scientists better explain and predict their behavior. Fundamental to the phenomenological approach is the symbolic interactionist truism, “If [people] define situations as real, they are real in their consequences.” In the original passage containing this famous phrase, Thomas and Thomas are attempting to account for the unusual behavior of a convict at Dannemora prison who had murdered several persons because they “had the unfortunate habit” of talking to themselves on the street. They write, “From the movement on their lips he imagined that they were calling him vile names, and he behaved as if this were true.” Even beyond such extreme cases, phenomenologists argue that “each individual extracts a subjective psychological meaning from the objective surroundings and that subjective environment shapes both personality and subsequent interaction” (see Caspi and Moffitt).

Phenomenology's usefulness in enhancing the predictive powers of causal explanation is not its only strength, however. Phenomenological theorists suggest that reaching verstehen (or phenomenological understanding) is “the key to understanding what is unique about the human sciences” (see Schwandt). Victor Frankl, for instance, argues that the human being's search for meaning is “a primary force in his life and not a ‘secondary rationalisation’ of instinctual drives.” Arguing against the determinism of positivist social science, phenomenologists like Frankl argue that human beings are ultimately self-determining. “Man does not simply exist but always decides what his existence will be, what he will become in the next moment.” As such, from this perspective, a social science that does not include some account of this process of meaning construction and self-determination in human existence can hardly be considered a human science at all.

On the other hand, there are numerous reasons why phenomenological research is avoided by social scientists. Critics contend that phenomenological work cannot be empirically verified and is therefore antiscientific. Additionally, the practical relevance of largely descriptive phenomenological enquiry for the applied world of policy formulation is not always clear, compared, for instance, to correlational and variable-oriented research.


URL: https://www.sciencedirect.com/science/article/pii/B0123693985005636

Introduction to Discrete Dislocation Statics and Dynamics

Raabe Dierk, in Computational Materials Engineering, 2007

8.6 Dislocation Reactions and Annihilation

In the preceding sections it was mainly long-range interactions between the dislocation segments that were addressed. However, strain hardening and dynamic recovery are essentially determined by short-range interactions, that is, by dislocation reactions and by annihilation, respectively.

Using a phenomenological approach that can be included in continuum-type simulations, one can differentiate between three major groups of short-range hardening dislocation reactions [FBZ80]: the strongest influence on strain hardening is exerted by sessile reaction products such as Lomer–Cottrell locks. The second strongest interaction type is the formation of mobile junctions. The weakest influence is naturally found for the case in which no junctions are formed.

Two-dimensional dislocation dynamics simulations usually account for annihilation and the formation of sessile locks. Mobile junctions and the Peach–Koehler interaction occur naturally among parallel dislocations. The annihilation rule is straightforward: if two dislocations on identical glide systems but with opposite Burgers vectors approach more closely than a certain minimum allowed spacing, they spontaneously annihilate and are removed from the simulation. Current 2D simulations [RRG96] use different minimum distances in the direction of glide ($d_{\mathrm{ann}}^{g} \approx 20\,|b|$) and climb ($d_{\mathrm{ann}}^{c} \approx 5\,|b|$), respectively [EM79] (Figure 8-8).
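A minimal sketch of this 2D annihilation rule is given below; it is not code from the book, and the positions, units, and helper names are assumptions made for illustration. It simply tests whether two opposite dislocations lie inside the annihilation ellipse spanned by the critical glide and climb spacings.

```python
import numpy as np

B_MAG = 1.0                 # |b| in arbitrary units (assumed)
D_GLIDE = 20.0 * B_MAG      # critical annihilation spacing along the glide direction
D_CLIMB = 5.0 * B_MAG       # critical annihilation spacing along the climb direction

def annihilates(pos_a, pos_b, burgers_a, burgers_b, glide_dir):
    """Return True if the two dislocations annihilate spontaneously."""
    b_a, b_b = np.asarray(burgers_a, float), np.asarray(burgers_b, float)
    if not np.allclose(b_a, -b_b):
        return False                              # only opposite Burgers vectors annihilate
    g_dir = np.array(glide_dir, dtype=float)
    g_dir /= np.linalg.norm(g_dir)                # unit vector along the glide direction
    c_dir = np.array([-g_dir[1], g_dir[0]])       # unit vector along the climb direction
    sep = np.asarray(pos_b, float) - np.asarray(pos_a, float)
    g, c = sep @ g_dir, sep @ c_dir
    # inside the ellipse with semi-axes D_GLIDE (glide) and D_CLIMB (climb)?
    return (g / D_GLIDE) ** 2 + (c / D_CLIMB) ** 2 <= 1.0

# Example: opposite dislocations 10|b| apart along the glide direction annihilate.
print(annihilates([0.0, 0.0], [10.0, 0.0], [1.0, 0.0], [-1.0, 0.0], [1.0, 0.0]))
```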


FIGURE 8-8. Annihilation ellipse in 2D dislocation dynamics. It is constructed by the values for the spontaneous annihilation spacing of dislocations that approach by glide and by climb.

Lock formation takes place when two dislocations on different glide systems react to form a new dislocation with a resulting Burgers vector which is no longer a translation vector of an activated slip system. In the 2D simulation this process can be realized by the immobilization of dislocations on different glide systems when they approach each other too closely (Figure 8-9). The resulting stress fields of the sessile reaction products are usually approximated by a linear superposition of the displacement fields of the original dislocations before the reaction.


FIGURE 8-9. Reaction ellipse in 2D dislocation dynamics. It is constructed by the values for the spontaneous reaction spacing of dislocations that approach by glide and by climb.

Dislocation reactions and the resulting products can also be included in 3D simulations. Due to the larger number of possible reactions, two aspects require special consideration, namely, the magnitude and sign of the latent heat that is associated with a particular reaction, and the kinematic properties and the stress field of the reaction product.

The first point can be addressed without using additional analytical equations. For investigating whether a particular reaction between two neighboring segments will take place or not, one subtracts the total elastic and core energy of all initial segments that participate in the reaction from that of the corresponding configuration after the reaction. If the latent heat is negative, the reaction takes place. Otherwise, the segments pass each other without reaction. In face-centered cubic materials two dislocations can undergo 24 different types of reactions. Of these, only 12 entail sessile reaction products. Assuming simple configurations, that is, only a small number of reacting segments, the corresponding latent heat data can be included in the form of a reference table.
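The energy criterion can be written schematically as follows; this is a sketch rather than the book's implementation, and `energy` stands for whatever user-supplied routine returns the total elastic plus core energy of a given segment configuration.

```python
def reaction_occurs(segments_before, segments_after, energy):
    """Latent-heat criterion: the reaction proceeds only if it releases energy."""
    latent_heat = energy(segments_after) - energy(segments_before)
    return latent_heat < 0.0   # negative latent heat: the reaction takes place
```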

The short-range back-driving forces that arise from cutting processes are calculated from the corresponding increase in line energy. For each of the cutting dislocations, the increase in dislocation line length amounts to the magnitude of the Burgers vector of the intersecting dislocation. Although this short-range interaction does not impose the same immediate hardening effect as a Lomer–Cottrell lock, it subsequently gives rise to the so-called jog drag effect, which is of the utmost relevance to the mobility of the dislocations affected.

The treatment of annihilation is also straightforward. If two segments have a spacing below the critical annihilation distance [EM79] the reaction takes place spontaneously. However, the subsequent reorganization of the dislocation segment vectors is not simple and must be treated with care.

The stress and mobility of glissile dislocation junctions can be simulated by using a simple superposition of the segments involved. Unfortunately, this technique does not adequately reflect the behavior of Lomer–Cottrell locks. Such sessile junctions must therefore be artificially rendered immobile.
