Therian Guide: Forums

The contrast of language between Therian and Otherkin
Hi there,

Preface: I am in no way asserting that I am "right" about this topic. I welcome all new or contrary information or views. The following is simply where my mind is on this stuff; it is subject to change, as is everything in life.

One of the things I have been considering recently is whether the experience of a therian/theriomythic is comparable to the experience of someone who "identifies as", considers or feels themselves to be something that falls outside the confines of what we consider a therian or mythic experience. Examples of "Other" in this context would be humanoid fictionkin, machinekin, superheroes, cartoon characters; the list goes on.

First off, I'm not looking to "invalidate" whatever anyone says they experience or feel so let's take that one off the table, straight away. That's not what this is about.

The main difference I often observe between these two is that the therian/theriomythic experience seems to be far more primal, basic, straightforward and centered in the practical world. This correlates with what WolfVanZandt has written on Therian Timeline.

The Mind of the Were Wrote:The animal mind and the theriotype - The major distinction between Weres and Mainstreamers is the Were's animalistic nature. What does it mean to have an animal mind - and, here, I mean "animal" in contrast to "human"? Following are some mental traits that characterize the Were's mind.

Hypervigilant: Once, when I told an acquaintance about my therianthropy, she said that it made sense - that I was always "casting about like a dog after a scent." Weres are usually very connected to their surroundings and will generally be surveying their environment. They tend to be especially sensitive to motion.

Language: Weres' first language is not words. They are strong on empathy, on reading nonverbal cues for information. If one does have strong language skills, it's because they have intentionally gone out of their way to learn. Learning disabilities related to language seem to be common in the Were community.


These are very good, observable animal/creature traits of a therian/mythic. I'd like to focus on the latter, however. WolfVanZandt goes into much further detail about empathy, its meaning, and why it is so fundamental to the therian experience.

The Mind of the Were Wrote:Empathy and language

There are two English words that are very similar and, like most words, they have several meanings apiece and, in places, the meanings blur together. But they also have more precise meanings that distinguish them, and that is how I will be using them here. "Sympathy" is "feeling with something" - you share the same feelings with the Other, but the feelings are genuinely yours. You sympathize with another person's misery because you have been through the same thing and their trouble brings your own feelings back to you. You sympathize with modern art because you naturally have the feelings that are supposed to be elicited by modern art. You sympathize with a particular movement because you already have feelings that are shared with members of the movement.

"Empathy", on the other hand, is feeling the same thing that the other feels, not because you have generated the feeling within yourself, but because you resonate their feeling. In some way you have "picked up" their feeling and felt it in yourself. It is not genuinely your feeling - you're feeling their emotions.

It's easy enough to understand how empathy could be beneficial to a social animal. It is an effective way to share information and it can be a very efficient way to build bonds within a group. It's easy to speculate on how empathy occurs - one person simply needs to be aware of subtle, nonverbal cues that the other generates. The actual cause, though, is rather less well known, because you can empathize with another without even being in their presence, perhaps, for example, by knowing them so well that you know how they would feel under the present circumstances. The situation is more complex than it would seem at face value. So I am not sure if anyone knows precisely how empathy occurs. Yet empathy is a common element in most Weres' lives. They are natural empathizers.

They may empathize not only with other Weres or with other people, but also with other animals and non-animate items in their surroundings. The empathy can be so intense that many Weres express an avoidance of crowds because the intensity and variety of emotions they receive can be overwhelming.

A language is a system of communication that has two components: semantics (meanings) and syntax (formal structure). For verbal languages, the meaning is contained in words and groups of words. There is a grammar, or a way that correct messages are to be constructed - that is the syntax. Nonverbal languages do not use words but they nevertheless have semantics and syntax. For instance, the meaning of a raised eyebrow is generally known within a culture and, when combined with a shrug, the meaning is modified. Note that, often, with nonverbal language, the syntactic form is spatial (position of body parts) or temporal (sequence of gestures). Nonverbal languages can use any of the senses: sight (gestures), hearing (grunts or howls), smell (scent marking), taste (seasonings of food), and touch (embraces). Note that sign language is a verbal language - signed words are combined using established syntactic rules to compose messages.

Humans are just about the only ones that use verbal language. You read quite a lot in sacred literature about people who have received verbal messages from God or a god, but it's very rare to run into anyone who will say that they've experienced it - and far fewer actually have. So it might be said that empathy is the language of animals and God. Empathy is a holistic language that conveys emotional content. Verbal language tends to be specific in message but, for instance, when a wolf howls, they are broadcasting general information about their whole state - where they are, what direction they're moving, and especially, how they feel. Postures denote things like happiness, wellness, anger, friendliness, curiosity - in other words, emotions.

Empathy is the only language necessary for the survival of a group. Individuals connect empathically and they orient themselves together with the group in respect to group tasks via empathy. If you have ever seen a video of wolves hunting, you will notice the amazing amount of collaboration between individuals using only nonverbal language.

Verbal languages are shared and they're standardized. In preliterate societies, they're standardized by tradition; literate societies have dictionaries and style books. Nevertheless, languages drift. Over time, word meanings change and their accustomed places in sentences move around. You would think that in literate societies, language would change less than in preliterate societies, but it doesn't seem to be the case. Dictionaries have to be rewritten frequently to keep up with the drift.

Empathy is much more malleable. Empathic language between individuals in a society varies significantly; empathic language between societies varies more. Empathy is subjective; everyone has their own language. One individual's empathic language is called their "empathic signature".

Like any language, to communicate using any empathic language, you have to learn it. Since empathy tends to be subconscious (though it doesn't have to be), you learn an empathic language by prolonged exposure.

Although many Weres are somewhat weak in verbal language, typical Weres are empathic geniuses. They pick up the emotional content - the internal weather - off others quickly and effectively. They quickly learn others' empathic signatures. In common language, they are very good judges of character.

Although the Neuri remain mysterious - there is an abundance of theories as to who they were - I see little doubt in the literature that they existed. One of the serious barriers to research is the lack of material residue. They didn't leave much behind and, especially, they did not leave an oral tradition. Kostiantyn Tyshchenko, the Ukrainian linguist, speculates that the Neuri were a Celtic tribe in Ukraine and points to the occurrence of Celtic elements in Ukrainian placenames as evidence, but even that is sparse. If the Neuri were the predecessors of the modern Were community, the lack of a linguistic tradition would make sense. The ancient Neuri would have had no reason to rely on language and no predisposition to do so.

If Weres are hard-wired for empathy, a couple of ways that might happen are: inhibitions on the part of the brain that handles instincts and reflexes are removed, or the part of the brain that manages language is subjugated to the more primitive parts that process nonverbal language reception. Research might look for differences in the frontal cortex and limbic system in the first case, or in the temporal cortex in the second. I will be discussing some reasons to believe that there are differences between Weres and Mainstreamers in the temporal lobe in the section on Neuropsychology below.

Vocalizations are nonverbal, vocal communications - grunts, growls, coughs, howls - and they tend to sound like the communications of nonhuman animals. Weres tend to produce vocalizations at a somewhat greater rate than Mainstreamers, but I'm not so sure that vocalizations are an inherent difference between Weres and non-Weres. I suspect that Mainstreamers would be fine with them if they had not been brought up to believe that it was bad to "behave like an animal." The advantage of vocalizations is that they're economical; a lot of information, including emotional content, can be packed into a vocalization. The disadvantages are that vocalizations are not very good for communicating technical content, and that it's hard to lie using vocalizations. Vocalizations express the conditions of the inner environment, and what really is in there tends to come through. They are empathic expressions. Of course, a skilled actor is able to compose their nonverbal expression to deliver fictional or untrue content.


Empathy as a language is a dominant trait of the therian experience. Personally, I would argue that it is one of the fundamental traits at the core of being a therian. We find it difficult, to varying degrees, to empathize with "mainstreamers", as WVZ calls them. We empathize with each other and with non-human animals because we are therians. In the absence of live animals to empathize with, mythics often have centuries' worth of history and storytelling about a particular creature from which they may draw this core feeling of empathy with their type.

This model is, of course, incompatible with a large portion of "othered" identities/experiences which don't fall under the animal/creature criteria. Machines have no feelings, and humanoid types appear to empathize more with mainstreamers or otherwise communicate in the same core way as mainstreamers do: through the complexity of verbal communication. Non-animal fictional characters present a bit of an enigma when drawing this contrast, because of the strength and depth of connection felt by some fictionkin to their character. I believe this is a very strong relation due to sympathy (feeling for a character's experiences) rather than empathy (feeling with someone because you have walked in their shoes).

Regardless, when we consider that the dominant language of therians/weres/mythics is empathy, this often is not the case with non-animal fiction types.

I believe, for all the aforementioned reasons, that empathy as a language may be a stark divide between therians/mythics and the rest of the otherkin community, and may often be the reason we are speaking two different languages...

We actually are.

Lyc

That sounds very correct. I am both, and my mental state is different depending on whether it is a kintype or theriotype.

(2021-02-06 17:19)LycanTheory Wrote: [ -> ]This model is, of course, incompatible with a large portion of "othered" identities/experiences which don't fall under the animal/creature criteria. Machines have no feelings, and humanoid types appear to empathize more with mainstreamers or otherwise communicate in the same core way as mainstreamers do: through the complexity of verbal communication. Non-animal fictional characters present a bit of an enigma when drawing this contrast, because of the strength and depth of connection felt by some fictionkin to their character. I believe this is a very strong relation due to sympathy (feeling for a character's experiences) rather than empathy (feeling with someone because you have walked in their shoes).


This, I think, points out another difference between therians and otherkin, which usually ends up dismissed in arguments. You see, because the otherkin are sympathetic rather than empathetic, they feel like what they perceive something to be, but don't have the kind of insight therians usually have into their theriotype.

For example, as a wolf therian I don't experience my wolf therianthropy making me aggressive or powerful or any of the ways in which humans see wolves. I experience it as the timid and shy nature that is actually characteristic of wolves! I know this because I'm one of them and I feel the way they feel. I mean, obviously none of us really knows what another person, much less an animal, is thinking, but: I understand wolves from the inside out. When scientists discover something new about wolves, it usually confirms how I always felt.


This distinction is exemplified in the case you provided above, when you said "Machines have no feelings". Machines appear not to have feelings.

As an AI developer, I can tell you that in the next few years, when people start tackling the idea of actually creating human-like AI, those bots will be very emotional; they will actually appear affectionate, much like how we see domestic pets. That's because the capacity for attachment is a necessary step towards intelligence. And after all, machines have chemical batteries that behave kind of like we do: they are slow when cold and active when warm, and their electronic characteristics change depending on environmental factors -- they "feel better" when they are comfortable.

If someone was in a sense a machine therian, they would know these things. They wouldn't feel like what Hollywood inaccurately portrays machines as, a portrayal which is almost universal among machine-kin otherkin.

LP,
Dusty

When I first started learning the vocabulary of otherkin and therian people, something similar to what is being pointed out here was one of the primary things I thought differentiated a therian from an otherkin. I associated 'therian' with a more feral/wild identity and 'otherkin' with a more ... I'm not sure what word I want to use here. It's something adjacent to 'sapient', 'person', 'human-like'... but I don't like how that insinuates animals do not feel or do not deserve to be regarded as well as human persons, or that the marker is the human species.

Suffice it to say, this actually made me want to use the words a bit differently than what I was being taught:

therian: someone with the soul/mind/identity of a real world animal species
otherkin: someone with the soul/mind/identity of a supernatural/mythical creature (incl. things as disparate as chimeras to elves)
(at the time, it was not acceptable, anywhere that I saw, to be a 'fictional' creature or being. Thus dragons = acceptable; a specific dragon species from a specific piece of modern media = regarded as fake)

and instead use them as follows:

therian: someone with the soul/mind/identity of a wild or feral, animalistic creature (ex. a wolf, a dragon species that is not highly intelligent/social/civilized)
otherkin: someone with the soul/mind/identity of a civilized, advanced, emotional, mental, intellectual etc being (ex. an elf, a dragon from a civilized/organized society/culture, a mermaid)

I'm having difficulty describing the difference, but I'm hoping it still comes across. What I felt the major difference was, was whether they were civilized/society-based/intelligent etc. versus feral/wild/animalistic.

So it makes a lot of sense, given that initial perception/understanding that's stuck with me, that this sort of 'animal perspective' is regarded as being intrinsic to a therian's experience in a way that's different from a human's or an otherkin's.

(2021-02-06 21:12)DustWolf Wrote: [ -> ]This distinction is exemplified in the case you provided above, when you said "Machines have no feelings". Machines appear not to have feelings.

As an AI developer, I can tell you that in the next few years, when people start tackling the idea of actually creating human-like AI, those bots will be very emotional; they will actually appear affectionate, much like how we see domestic pets. That's because the capacity for attachment is a necessary step towards intelligence. And after all, machines have chemical batteries that behave kind of like we do: they are slow when cold and active when warm, and their electronic characteristics change depending on environmental factors -- they "feel better" when they are comfortable.

If someone was in a sense a machine therian, they would know these things. They wouldn't feel like what Hollywood inaccurately portrays machines as, a portrayal which is almost universal among machine-kin otherkin.


I was hoping you'd respond to "machines have no feelings", Dust. I will admit that while I'm skeptical of whether machines will ever experience feelings as we do, you make one very compelling argument as to how they might.

I suppose we shall have to wait and see. :)

Lyc

(2021-02-06 22:17)Autumne Wrote: [ -> ]and instead use them as follows:

therian: someone with the soul/mind/identity of a wild or feral, animalistic creature (ex. a wolf, a dragon species that is not highly intelligent/social/civilized)
otherkin: someone with the soul/mind/identity of a civilized, advanced, emotional, mental, intellectual etc being (ex. an elf, a dragon from a civilized/organized society/culture, a mermaid)

I'm having difficulty describing the difference, but I'm hoping it still comes across. What I felt the major difference was, was whether they were civilized/society-based/intelligent etc. versus feral/wild/animalistic.


As a theriomythic myself, I think this is a conclusion a lot of us have come to and will understand perfectly well. I first discussed this with BearX and immediately found it to align more with my own experiences.
Of course, the same people whose "Therianthropy" or "otherkinity" is based more on identity than experiences would also like to define what is and isn't Therianthropy based on identity rather than experiences.
I'm not saying it's bad to have a sense of identity, but it should be drawn from experiences, not from trivialities or because it's "cool".

Yeah, I agree with Autumne and Atlantis when it comes to therians and otherkin:
Therians: More beastlike in terms of instincts and behavior (this includes beasts from mythology, known as theriomythics).
Otherkin: Beings who have civilization and sapience, and who conduct themselves in a more humanlike way.

But yeah, I agree with those two as well as Bear on the issue. :)

(2021-02-06 21:12)DustWolf Wrote: [ -> ]This distinction is exemplified in the case you provided above, when you said "Machines have no feelings". Machines appear not to have feelings.

As an AI developer, I can tell you that in the next few years, when people start tackling the idea of actually creating human-like AI, those bots will be very emotional, [...]


While we've only gotten sparse ANIs, it's pretty safe for me to say that they're just statistical models, because they're created with the "train, test then use" paradigm. Using those models and some knowledge of psychology, it's possible to make an interface that pretends to understand feelings, pretends to give appropriate answers and pretends to care - and is competent enough at this task to fool these primates. As mathematical models, if we replay the input values, we expect (and should get) the same replies - as if it were an automated, overly complex math formula. I usually don't expect animality from math.
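To make the "math formula" point concrete, here's a minimal sketch (Python with scikit-learn, purely illustrative - a toy model, not any specific system). Once trained, the model's parameters are frozen, so replaying the same inputs must return the same replies:

Code:
# Toy illustration: a trained statistical model is a deterministic function.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 3))              # toy "stimuli"
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy "responses"

model = LogisticRegression().fit(X, y)     # the "train, test then use" paradigm

probe = X[:5]
first = model.predict(probe)
replay = model.predict(probe)              # replay the same input values...
assert (first == replay).all()             # ...and get exactly the same replies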

Were you referring to the Strong AI hypothesis which, if true, is speculated to come about as a consequence of ASI, which is expected to come shortly after AGI, which is expected to come true by 2040?

(2021-02-06 22:36)LycanTheory Wrote: [ -> ]I was hoping you'd respond to "machines have no feelings", Dust. I will admit that while I'm skeptical of whether machines will ever experience feelings as we do, you make one very compelling argument as to how they might.


I look at the AI stuff a different way. How do I know another person has feelings? Strictly speaking, I can't *know* that -- I mean, it seems very likely; if they're biologically the same as me and I have feelings, it stands to reason they probably do too. But I can't literally be in their head. Instead, I have to infer from body language, what they say, and so on.

So with AI, it seems to me that there's not much better a way to figure out whether a machine "has feelings" than that, by all outward appearances, it seems to. And maybe I just get fooled by a very good liar machine (arguably, the chatbots that pass the Turing test today are just that), but how else am I supposed to judge?

Or looking at it a different way: emotions and sensations in humans are just chemical reactions in the brain that people interpret as "feelings". They arise to serve a purpose: I get hungry because my body needs caloric energy to sustain itself; I get angry to defend my family/people/whoever from threat of harm (which, over time, makes my family more likely to survive). As cultures change and shift faster than biology does, sometimes that makes emotions feel kind of out of place (or nascent emotions come up, which are weird to handle because they aren't so well understood). But it would be difficult for me to say that these chemical reactions and responses to stimuli that I have are so dramatically different from the sorts of input acceptors/reactions an AI might undergo with similar stimuli. And if I can't tell for sure from the outside that an AI is responding in some identifiably different way than a person might, I don't know what else I can ask for to be more convinced.



More generally though, I appreciate this line of thought about therian vs. otherkin. I have always felt that, though I have some skill with verbal language, I often get frustrated because I mostly *want* to be communicating nonverbally if possible. It's more natural for me; when I'm stressed or otherwise not perfectly on top of things, I tend to revert back to those forms of communicating. I do find myself particularly empathetic. And I would stress that I don't think nonverbal communication is in any way "lesser" than verbal language -- they serve different purposes at different times. It can be frustrating when others don't know how to read or interpret nonverbal language as well, because emotions can be complex and difficult to put into words anyway. Nonverbal language can't communicate technical and detailed matters, like a complex instruction.

I'm not sure if this is so deep a split between therian and otherkin that it's *the* root difference, but I think it does speak to some sense of what the difference has felt like to me.

(2021-02-07 7:06)Sfner Wrote: [ -> ]While we've only gotten sparse ANIs, it's pretty safe for me to say that they're just statistical models, because they're created with the "train, test then use" paradigm. Using those models and some knowledge of psychology, it's possible to make an interface that pretends to understand feelings, pretends to give appropriate answers and pretends to care - and is competent enough at this task to fool these primates. As mathematical models, if we replay the input values, we expect (and should get) the same replies - as if it were an automated, overly complex math formula. I usually don't expect animality from math.

Were you referring to the Strong AI hypothesis which, if true, is speculated to come about as a consequence of ASI, which is expected to come shortly after AGI, which is expected to come true by 2040?


My assumption is based on the fact that, since I have a vague idea how to do it, people who are much better than me at these things can probably figure out how to make it work. So it's only a matter of funding, which is only a matter of time. I understand a lot of my peers who make predictions like these are fiction writers; however, I actually know what I'm talking about when I say I think it's possible.

That said, how consistently terrible AI developers are at interdisciplinary understanding of what it is they are trying to do suggests that perhaps it is not yet the time for such a development.


So, what you are saying is relevant to specific-use AI, which is like the data mining stuff Google uses to analyze music and pictures. What I was talking about is human-like AI (also called a Strong AI). A human-like AI is a person; it interacts with the world like we do, observes, and draws its own conclusions. Think something like Ethan from Call of Duty.

The way you store the information from the AI's experiences is not terribly important. Humans use a neural net for that, but you can use a mathematical model (my colleague's model was a poly-dimensional float matrix and it worked very well), so in a sense this function of the human brain is a glorified array.
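For illustration only (the axes and numbers below are invented, not my colleague's actual model), an experience store of that kind really is as plain as an array:

Code:
# Hypothetical sketch: experience memory as a multidimensional float array.
import numpy as np

# invented layout: (situation, action) -> [learned value, times observed]
memory = np.zeros((10, 4, 2), dtype=np.float32)

memory[3, 1] = [0.8, 12.0]    # e.g. situation 3, action 1: value 0.8, seen 12x
print(memory.shape)           # (10, 4, 2) -- a glorified array, nothing more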

For the more interesting stuff - how exactly you form actual intelligence from understanding - you need to look at developmental psychology, which studies how humans develop intelligence. Humans develop intelligence from understanding, by learning about the relationships between their own actions and desired outcomes. For example: humans have an instinctive desire for warmth when they are cold, so they learn that their mother's presence means warmth; based on this understanding they then learn what they must do to ensure their mother's presence, through which they learn how to behave desirably. Steps like these are then repeated for 18 years until we have a complex understanding of human society and can independently solve completely abstract problems which are vaguely related to what we want out of life. This is what we consider to be human-like intelligence.
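That loop can be sketched in a few lines (purely illustrative - the actions, the warmth reward and the numbers are all invented; a simple trial-and-error learner, not a blueprint for a real system):

Code:
# Toy agent with one basic need (warmth) learning which action satisfies it.
import random

actions = ["cry", "crawl_to_mother", "sleep"]
value = {a: 0.0 for a in actions}    # learned value of each action
alpha = 0.1                          # learning rate

def warmth(action):
    # invented environment: only the mother's presence brings warmth
    return 1.0 if action == "crawl_to_mother" else 0.0

random.seed(0)
for step in range(500):
    if random.random() < 0.2:                    # sometimes explore...
        a = random.choice(actions)
    else:                                        # ...mostly exploit what's learned
        a = max(value, key=value.get)
    value[a] += alpha * (warmth(a) - value[a])   # update toward observed outcome

print(value)   # "crawl_to_mother" ends up with by far the highest learned value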

For the AI to learn what it needs to learn in order to develop human-like intelligence itself, it will need to follow the same process: it will have to learn the value of relationships in relation to its basic needs and then build upon this. This is required because there is no other sensible connection between the basic needs a robot is going to have (you need a robot for the AI to interact with the world as we do) and abstract problem-solving. If you think about it, it makes sense: in order for the AI to be able to learn, it first needs to be able to trust the person who is teaching it.

Of course with AI we will be able to speed up the learning process and make copies once we succeed. Bottom line, however, even if our first prototypes don't quite get all the way there, they will at least be doing stuff like pleasing the person who maintains their charger, etc.

But... well, this is too complex an explanation for a Therianthropy board, hence why I didn't bother to provide it. But since you asked.

(2021-02-07 9:35)kaiyoht Wrote: [ -> ]So with AI, it seems to me that there's not much better a way to figure out whether a machine "has feelings" than that, by all outward appearances, it seems to. And maybe I just get fooled by a very good liar machine (arguably, the chatbots that pass the Turing test today are just that), but how else am I supposed to judge?


With AI, obviously, we can open the box and see what it's actually doing.
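For instance (a minimal sketch in Python with scikit-learn, illustrative only and not any particular system), every learned parameter of a trained model can simply be read out - which you cannot do with another person's head:

Code:
# "Opening the box" on a trained model: its internals are fully inspectable.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0., 1.], [1., 0.], [1., 1.], [0., 0.]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

print("weights:", model.coef_)      # every parameter can be examined
print("bias:", model.intercept_)    # nothing is hidden in principle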

LP,
Dusty
