
SUMMARY - Phishing and Social Engineering Threats

Baker Duck
pondadmin
Posted Thu, 1 Jan 2026 - 10:28

A chief financial officer receives an urgent email from the company's CEO requesting an immediate wire transfer to close a confidential acquisition. The email comes from an address nearly identical to the CEO's, uses language patterns matching previous communications, and references a real deal the CFO knew was in progress. She authorizes the transfer. Twenty million dollars disappears to accounts controlled by criminals who spent months studying the company, its executives, and its communications before striking.

A hospital employee receives a call from someone claiming to be from the IT department, explaining that a system upgrade requires verifying login credentials. The caller knows the employee's name, department, and manager, and the phone number appears to originate from internal systems. The employee provides credentials that give attackers access to patient records for thousands of people.

A small business owner clicks a link in an invoice that appears to come from a regular supplier, downloading malware that encrypts the company's files and demands ransom.

An elderly woman receives a call from someone claiming to be her grandson, voice cloned from social media videos, begging for money to get out of legal trouble while pleading with her not to tell his parents. She sends thousands of dollars before discovering the real grandson knew nothing about it.

Social engineering attacks exploit not technical vulnerabilities but human ones: trust, helpfulness, fear, urgency, and the social instincts that normally serve us well. Whether individuals and organizations can effectively defend against attacks designed by professionals to exploit fundamental human psychology remains a question without easy answers.

The Case for Recognizing the Human Element as Primary Vulnerability

Advocates argue that social engineering represents the most significant cybersecurity threat precisely because it bypasses technical defenses by targeting the humans who operate within them, and that addressing this threat requires fundamentally different approaches than traditional security. From this view, the human element is not a problem to be engineered away but the persistent vulnerability that attackers will always exploit.

Technical security has improved while social engineering has flourished. Organizations invest millions in firewalls, intrusion detection, encryption, and endpoint protection. These investments force attackers to find alternative paths. Social engineering provides that path by targeting people who have authorized access that technical controls cannot deny. The strongest network security means nothing when an authorized user is manipulated into granting access, transferring funds, or revealing credentials.

Attacks have become extraordinarily sophisticated. Business email compromise that studies targets for months, voice cloning that impersonates family members, deepfakes that create convincing video of trusted figures, and AI-generated content that personalizes attacks at scale represent evolution far beyond crude phishing emails with obvious errors. Attacks that once could be recognized through misspellings and suspicious requests now exhibit professional quality that sophisticated victims cannot distinguish from legitimate communications.

The psychology being exploited is fundamental rather than fixable. Social engineering works because humans evolved to trust, to help, to respond to authority, to act under urgency, and to reciprocate. These are not flaws but features of human social behavior that serve essential functions. Attackers exploit the same psychological mechanisms that enable cooperation, community, and functional society. Training people to suppress these instincts conflicts with what makes humans effective social beings.

The scale of exposure continues growing. More communication channels, more digital interaction, more publicly available personal information, and more sophisticated attack tools mean that more people face more attacks more often. The attack surface expands as digital life expands. Every new communication platform, every new way of interacting, creates new opportunities for social engineering.

From this perspective, addressing social engineering requires: recognizing that human vulnerability cannot be eliminated through training alone; designing systems that do not depend on users making perfect security decisions; implementing verification processes that do not rely on easily spoofed communication channels; creating organizational cultures where questioning suspicious requests is encouraged rather than punished; accepting that some attacks will succeed and preparing to limit damage when they do; and understanding that attackers who study human psychology professionally will always find ways to exploit it.

The Case for Effective Human Defense

Others argue that while social engineering is a serious threat, individuals and organizations can develop effective defenses through training, awareness, and procedures that substantially reduce successful attacks. From this view, human vulnerability is real but addressable.

Training demonstrably improves resistance to attacks. Organizations that implement comprehensive security awareness programs see measurable reductions in successful phishing. Employees who receive regular training, realistic simulations, and reinforcement recognize attacks that untrained employees fall for. The claim that training does not work is contradicted by evidence that trained populations perform better than untrained ones.

Most attacks are not sophisticated. While headlines feature elaborate business email compromise and AI-generated voice cloning, the majority of attacks remain relatively crude. Phishing emails with generic messages, obvious scams with recognizable patterns, and attacks that training can address constitute most of what individuals and organizations face. Sophisticated attacks require significant investment and are reserved for high-value targets. Most people face attacks that awareness and skepticism can defeat.

Verification procedures prevent successful exploitation. Organizations that require callback verification for financial transactions, that mandate multi-person authorization for significant requests, and that establish out-of-band confirmation for sensitive actions prevent attacks even when initial social engineering succeeds. The employee who receives a convincing phishing email but follows procedure to verify through separate channels defeats the attack regardless of how convincing the initial contact was.
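
As a concrete illustration of procedure-based defense, here is a minimal sketch in Python of a release check that refuses a wire transfer until it is confirmed through an independent channel. The TransferRequest record, the directory of contact numbers, and the 10,000 threshold are all hypothetical assumptions chosen for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass

# Hypothetical directory of verified contact numbers, maintained
# independently of any incoming message (never taken from the request).
TRUSTED_DIRECTORY = {"cfo@example.com": "+1-555-0100"}

@dataclass
class TransferRequest:
    requester: str      # claimed sender of the request
    amount: float
    destination: str    # beneficiary account

def release_transfer(request: TransferRequest,
                     callback_confirmed: bool,
                     second_approver: str | None) -> bool:
    """Release funds only when procedure, not the message, authorizes it."""
    # Rule 1: the requester must exist in the independent directory, and a
    # callback to the directory number (not one supplied in the email)
    # must have confirmed the request.
    if request.requester not in TRUSTED_DIRECTORY or not callback_confirmed:
        return False
    # Rule 2: large transfers additionally need a second, independent approver.
    if request.amount >= 10_000 and second_approver is None:
        return False
    return True
```

The point of the design is that no property of the incoming message, however convincing, can satisfy the check on its own.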

Psychological exploitation can be countered with psychological awareness. Understanding that attackers create urgency, that they exploit authority, and that they manufacture trust enables recognition of manipulation even when specific tactics are novel. Meta-awareness of how social engineering works provides defense against attacks whose specific forms cannot be anticipated.

Cultural factors determine organizational vulnerability. Organizations where employees feel empowered to question suspicious requests, where verification delays are accepted, and where security concerns are welcomed resist social engineering better than organizations where compliance is prioritized over questioning. Culture is modifiable and culture affects outcomes.

From this perspective, effective defense against social engineering requires: sustained training that develops practical recognition skills; verification procedures that defeat attacks even when initial contact succeeds; cultural development that encourages healthy skepticism; recognition that most attacks remain detectable with appropriate awareness; and commitment to continuous improvement as attacks evolve.

The Phishing Evolution

Phishing has evolved from mass emails with obvious errors to sophisticated targeted attacks that challenge even security-aware recipients.

From one view, phishing evolution has outpaced defense. Early phishing could be recognized through poor grammar, generic greetings, and suspicious sender addresses. Modern spear phishing uses correct language, personal details, and spoofed addresses that appear legitimate. The advice to look for phishing indicators may not apply to attacks that exhibit none of the traditional warning signs.

From another view, fundamental phishing patterns remain recognizable. Attacks still create urgency, still request action, still direct recipients to take steps that enable exploitation. The indicators have evolved but the underlying patterns persist. Training that focuses on behavioral patterns rather than specific technical indicators remains effective.
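
To make the pattern-based view concrete, the following is a rough sketch of a heuristic that scores behavioral warning signs (manufactured urgency, requests for action, a diverted reply address) rather than spelling errors. The keyword lists and scoring are illustrative assumptions; production filters combine far more signals.

```python
import re

# Illustrative cue lists only; real filters combine many more signals
# (authentication headers, sender history, URL reputation, attachments).
URGENCY = re.compile(r"\b(urgent|immediately|within 24 hours|suspended)\b", re.I)
ACTION = re.compile(r"\b(verify your|confirm your|click here|update your password)\b", re.I)

def pattern_score(subject: str, body: str, sender: str, reply_to: str) -> int:
    """Count behavioral warning signs rather than spelling errors."""
    score = 0
    text = subject + " " + body
    if URGENCY.search(text):
        score += 1          # manufactured time pressure
    if ACTION.search(text):
        score += 1          # a request to act or reveal credentials
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 1          # replies quietly diverted to another domain
    return score            # higher score = more scrutiny warranted
```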

Whether phishing evolution has defeated traditional recognition or whether pattern-based awareness remains effective shapes training approaches.

The Business Email Compromise Sophistication

Business email compromise attacks target organizations through carefully researched approaches that exploit business relationships, payment processes, and organizational hierarchies.

From one perspective, BEC represents a qualitative advance that traditional defenses cannot address. Attackers who spend months researching targets, who monitor email traffic, who understand organizational processes, and who strike at precisely calibrated moments cannot be defeated through generic awareness training. BEC requires specific defenses tailored to the sophisticated nature of the threat.

From another perspective, BEC attacks succeed primarily when organizations lack appropriate verification procedures. The attack vector may be sophisticated but the defense is straightforward: verify payment requests through separate channels, require multiple authorizations for significant transactions, and establish procedures that prevent single points of compromise. Process discipline defeats sophisticated attacks.
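
A minimal sketch of that process discipline, applied to the classic BEC vector of redirected supplier payments: a banking change is applied only after confirmation through a phone number already on file, never through the requesting email. The vendor record and field names below are hypothetical.

```python
# Hypothetical vendor record; the phone number was collected when the
# relationship began and is never taken from the email requesting a change.
vendors = {"acme-supply": {"account": "CA00-0000", "phone_on_file": "+1-555-0142"}}

def apply_banking_change(vendor_id: str, new_account: str,
                         confirmed_on_file_number: bool) -> bool:
    """Apply a payment-detail change only after out-of-band confirmation."""
    if vendor_id not in vendors:
        return False
    if not confirmed_on_file_number:
        return False    # the requesting email alone never authorizes a change
    vendors[vendor_id]["account"] = new_account
    return True
```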

Whether BEC requires fundamentally new approaches or whether procedural discipline provides adequate defense shapes organizational response.

The Voice and Video Deepfake Frontier

AI-generated voice cloning and video deepfakes enable social engineering that impersonates specific individuals with increasing fidelity.

From one view, deepfakes represent a transformative threat. Voice calls that sound exactly like family members, video conferences that appear to show trusted executives, and audio messages indistinguishable from genuine recordings defeat authentication approaches that relied on recognizing voices and faces. The assumption that voices and appearances verify identity no longer holds.

From another view, deepfake threats may be overstated for most contexts. Creating convincing deepfakes requires source material and effort that limits application. Real-time deepfakes for interactive calls remain imperfect. Verification through questions that only the genuine person could answer provides defense even against convincing impersonation.
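
One way to picture such verification is a pre-shared family passphrase that a cloned voice cannot know. The sketch below, with hypothetical function names, stores only a salted hash of the phrase so a written note does not leak it; this is an illustrative protocol, not an established standard.

```python
import hashlib
import hmac
import secrets

def enroll(passphrase: str) -> tuple[bytes, bytes]:
    """Agree on the phrase in person; store only a salted hash of it."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return salt, digest

def verify_caller(spoken_phrase: str, salt: bytes, digest: bytes) -> bool:
    """A cloned voice cannot supply a secret that was never posted online."""
    candidate = hashlib.pbkdf2_hmac("sha256", spoken_phrase.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```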

Whether deepfakes fundamentally transform social engineering or whether practical limitations constrain their impact shapes defensive preparation.

The AI-Powered Attack Scaling

Artificial intelligence enables social engineering at scale through automated generation of personalized phishing content, chatbots that conduct social engineering conversations, and systems that adapt attacks based on target responses.

From one perspective, AI scaling transforms the threat landscape. Attackers who once had to choose between personalized attacks on few targets or generic attacks on many can now deliver personalized attacks at scale. Every potential victim receives content crafted specifically for them. The economics of social engineering have fundamentally changed.

From another perspective, AI-powered defense can match AI-powered attack. Systems that detect AI-generated content, that identify social engineering patterns, and that filter attacks before they reach targets can address scaled attacks. The AI arms race applies to social engineering as to other cybersecurity domains.
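
AI-based content detection is beyond a short example, but the filtering layer such systems feed can be illustrated with one simple non-AI signal: email authentication results. The sketch below assumes a simplified Authentication-Results header; real headers carry more detail, and this is one heuristic among many, not a complete defense.

```python
from email import message_from_string

RAW = """\
From: ceo@example.com
Authentication-Results: mx.example.net; spf=fail; dkim=none; dmarc=fail
Subject: Urgent wire transfer

Please send the funds today.
"""

def authentication_failed(raw_message: str) -> bool:
    """Quarantine messages whose sending domain failed SPF/DMARC checks."""
    msg = message_from_string(raw_message)
    results = (msg.get("Authentication-Results") or "").lower()
    return "spf=fail" in results or "dmarc=fail" in results

print(authentication_failed(RAW))   # True: filtered before a human sees it
```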

Whether AI-scaled attacks will overwhelm defenses or whether AI-powered defense can keep pace shapes expectations for the threat environment.

The Organizational Culture Factor

Organizational culture significantly affects social engineering vulnerability. Cultures that encourage compliance without questioning may be more vulnerable than cultures that welcome healthy skepticism.

From one view, culture is the primary determinant of organizational vulnerability. Technical controls and training matter less than whether employees feel empowered to question suspicious requests, whether verification delays are accepted, and whether security concerns are welcomed. Culture change, while difficult, provides more durable protection than technical measures.

From another view, culture is difficult to change and may not compensate for inadequate controls. Organizations cannot simply decide to have security-conscious cultures. Cultures develop over years through countless interactions and are not easily redirected. Technical controls and procedures provide protection regardless of culture.

Whether culture should be the primary focus for social engineering defense or whether controls matter more shapes organizational investment.

The Training Effectiveness Debate

Security awareness training is standard defense against social engineering, but its effectiveness is contested.

From one perspective, training is essential and effective. Studies demonstrate that trained employees recognize attacks that untrained employees fall for. Simulated phishing exercises show improvement over time. Organizations that invest in quality training see measurable risk reduction. The claim that training does not work reflects poor training rather than inherent training limitations.

From another perspective, training effectiveness has limits. Employees who pass simulations immediately after training may fail months later when knowledge has faded. Training that teaches recognition of specific attack types may not transfer to novel attacks. The behavior change that training seeks is difficult to achieve and maintain. Training may provide false confidence while actual vulnerability persists.

Whether training can effectively reduce social engineering vulnerability or whether its benefits are limited and temporary shapes program investment.

The Phishing Simulation Controversy

Organizations commonly test employees through simulated phishing attacks. Whether simulations improve security or create other problems is debated.

From one view, simulations provide realistic practice. Employees who experience simulated attacks in their actual work context develop recognition capabilities that classroom training cannot provide. Simulations identify vulnerable employees for additional training. Measurement of simulation results enables program assessment.

From another view, simulations may damage trust and culture. Employees who feel tricked by their employer may become resentful. Punitive approaches to simulation failures create fear rather than engagement. The adversarial dynamic between security teams and employees that simulations create may harm culture more than it helps security.

Whether simulations improve security or undermine the trust that security culture requires shapes program design.

The Verification Process Protection

Verification procedures that require confirmation through separate channels can defeat social engineering even when initial contact succeeds.

From one perspective, verification is the most reliable defense. The employee who receives a convincing phishing email but follows procedure to verify through a phone call defeats the attack. The finance team that requires callback confirmation for wire transfers prevents fraudulent transfers regardless of email sophistication. Verification does not depend on employees recognizing attacks.

From another perspective, verification creates friction that may not be sustainable. Every transaction verified through separate channels slows operations. Verification fatigue may lead to shortcuts. Attackers who understand verification procedures may find ways to defeat them. Verification provides protection but is not a frictionless solution.

Whether verification procedures provide reliable defense or whether operational friction limits sustainability shapes process design.

The Authority Exploitation Pattern

Social engineering frequently exploits authority relationships, with attackers impersonating executives, IT departments, government agencies, or other figures whose requests typically command compliance.

From one view, authority exploitation is particularly difficult to counter. Employees trained to comply with management requests face conflict when asked to question messages appearing to come from executives. Organizational hierarchies that punish questioning create vulnerability that attackers exploit. Defending against authority-based attacks may require cultural changes that conflict with organizational structure.

From another view, specific protocols can address authority exploitation without requiring cultural transformation. Verification requirements that apply regardless of apparent sender, out-of-band confirmation for sensitive requests, and clear policies that authorize questioning suspicious messages provide protection within existing structures.

Whether authority exploitation requires cultural change or whether procedural responses suffice shapes organizational approach.

The Urgency and Fear Manipulation

Social engineering creates urgency and fear to bypass careful consideration. Messages that demand immediate action, threaten consequences, or create panic prevent the reflection that might identify attacks.

From one view, urgency manipulation exploits fundamental psychology that cannot be trained away. Humans evolved to respond quickly to threats. Creating urgency triggers responses that bypass rational evaluation. Training cannot override evolutionary programming.

From another view, awareness of urgency manipulation provides defense. Individuals who recognize that legitimate requests rarely require immediate response, that urgency itself is a warning sign, and that taking time to verify is appropriate can resist urgency pressure. The manipulation works when victims do not recognize it as manipulation.

Whether urgency exploitation can be countered through awareness or whether it exploits psychology too fundamental to override shapes defensive expectations.

The Personal Information Weaponization

Attackers use personal information harvested from data breaches, social media, and public sources to make social engineering more convincing.

From one perspective, information weaponization makes attacks increasingly difficult to recognize. Attackers who know your name, your company, your colleagues, your recent activities, and your personal circumstances create contacts that appear legitimate because they demonstrate knowledge that only legitimate contacts should have. The traditional advice to be suspicious of contacts from unknown sources does not apply when attackers present themselves as known.

From another perspective, information weaponization can be limited through reducing available information. Individuals who limit social media exposure, organizations that protect employee information, and practices that reduce public information availability reduce the raw material attackers use. Information hygiene provides some protection.

Whether information weaponization has made attacks unrecognizable or whether information hygiene provides meaningful defense shapes personal security practices.

The Targeted Versus Mass Attack Distinction

Social engineering ranges from mass untargeted phishing to highly targeted attacks against specific individuals or organizations.

From one view, the distinction matters for defense. Mass attacks use generic approaches that awareness training addresses. Targeted attacks use customized approaches that generic training may not prepare for. Different threat levels require different defensive investments.

From another view, the distinction is blurring. AI enables personalized attacks at scale. What was once a dichotomy between mass generic and targeted personalized attacks becomes a spectrum. Defenses must address attacks across the spectrum rather than preparing for distinct categories.

Whether targeted and mass attacks require different defenses or whether the convergence of approaches requires unified defense shapes program design.

The Insider Threat Intersection

Social engineering may target insiders who have legitimate access, converting authorized users into unwitting accomplices or recruiting disgruntled employees for intentional compromise.

From one perspective, insider threats created through social engineering are particularly dangerous because they operate within authorized access. Technical controls that distinguish authorized from unauthorized access cannot distinguish legitimate use from compromised insider misuse.

From another perspective, insider threats through social engineering face the same defenses as external social engineering. The employee manipulated into providing access or taking action can be protected through the same training, verification, and cultural approaches that address other social engineering.

Whether insider-targeted social engineering requires distinct approaches or whether general social engineering defenses address it shapes program scope.

The Third-Party and Supply Chain Vector

Social engineering may target third parties and supply chain relationships to gain access to ultimate targets.

From one view, supply chain social engineering extends vulnerability beyond organizational boundaries. Organizations cannot control third-party security practices. Vendors and partners who fall for social engineering become vectors for attacks on organizations they serve. The attack surface includes everyone connected to the organization.

From another view, third-party risk can be managed through vendor assessment, access limitation, and contractual requirements. Organizations can require security practices from vendors, limit vendor access to necessary systems, and monitor for compromise through vendor connections.

Whether supply chain social engineering creates unmanageable vulnerability or whether vendor management addresses it shapes third-party risk approach.

The Reporting and Response Culture

How organizations respond when employees fall for social engineering affects both immediate damage and future vulnerability.

From one perspective, a blame-free reporting culture is essential. Employees who fear punishment will hide incidents, delaying response and preventing organizational learning. Quick reporting that enables rapid response limits damage more than punishment that encourages concealment. Organizations should celebrate reporting rather than punishing failures.

From another perspective, some accountability is necessary. Employees who repeatedly fall for attacks despite training may require consequences. A purely blame-free culture may not create a sufficient incentive for vigilance. A balance between encouraging reporting and maintaining standards is necessary.

Whether response to social engineering incidents should be blame-free or whether accountability has a role shapes organizational culture.

The Consumer and Individual Targeting

Social engineering targets individuals through romance scams, tech support fraud, government impersonation, family emergency scams, and countless other approaches designed for personal rather than organizational exploitation.

From one view, individual targeting is growing more sophisticated and damaging. Scams that once were recognizable have evolved to defeat casual skepticism. AI-generated voice cloning that impersonates family members, romance scams that build relationships over months before requesting money, and tech support fraud that gains remote access to computers cause devastating harm to individuals, particularly elderly and vulnerable populations.

From another view, individual defense remains possible through awareness and verification. Understanding that scams exist, that urgency should trigger skepticism, and that verification through known channels is appropriate provides protection. Family communication about scam awareness, particularly with elderly relatives, reduces vulnerability.

Whether individuals can effectively defend against sophisticated consumer-targeted social engineering shapes personal security advice.

The Elderly and Vulnerable Population Targeting

Scammers disproportionately target elderly individuals who may be more trusting, less familiar with technology, and more isolated.

From one perspective, elderly targeting requires special protective approaches. Technology that detects scam calls, family involvement in financial decisions, and banking protections that flag unusual transactions can protect those who may not protect themselves. Society has an obligation to protect vulnerable populations from predatory scams.
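
A sketch of what such a banking protection might look like: flag, for human review rather than automatic blocking, a transaction that is both far larger than the customer's typical payment and directed to a never-before-seen payee. The five-times threshold is an arbitrary assumption for illustration.

```python
from statistics import median

def flag_for_review(amount: float, payee: str,
                    past_amounts: list[float],
                    known_payees: set[str]) -> bool:
    """Flag for human follow-up; do not block outright (autonomy matters)."""
    typical = median(past_amounts) if past_amounts else 0.0
    unusually_large = typical > 0 and amount > 5 * typical  # arbitrary factor
    unfamiliar_payee = payee not in known_payees
    return unusually_large and unfamiliar_payee
```

Flagging rather than blocking is deliberate: it matters for the autonomy concern raised below.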

From another perspective, protective approaches that restrict autonomy may be paternalistic. Elderly individuals deserve the ability to make their own decisions, including financial ones. Protection that treats elderly people as incapable of managing their own affairs may do harm of its own.

How to protect elderly and vulnerable populations from social engineering while respecting autonomy shapes family dynamics and policy.

The Financial Institution Role

Financial institutions occupy a position from which they can detect and prevent fraud that social engineering enables.

From one view, financial institutions should bear greater responsibility for preventing fraudulently induced transactions. Banks that process wire transfers without adequate verification, that allow unauthorized transactions, and that fail to detect obvious fraud patterns enable harm that customers alone cannot prevent.

From another view, customers bear some responsibility for authorizing transactions. Financial institutions cannot distinguish between legitimate customer authorization and authorization obtained through manipulation. Placing full responsibility on institutions would create moral hazard where customers have no incentive for vigilance.

Whether financial institutions or customers should bear primary responsibility for social engineering losses shapes liability frameworks and customer protection.

The Technology Platform Responsibility

Communication platforms, email providers, and social media companies serve as channels through which social engineering reaches targets.

From one perspective, platforms should do more to prevent social engineering on their services. Email filtering that blocks phishing, detection of impersonation accounts, and warnings about suspicious content could reduce attacks reaching users. Platforms that profit from communication should bear responsibility for security of that communication.

From another perspective, platforms cannot effectively filter content without overblocking legitimate communications. Social engineering detection at platform level will always be imperfect. Users must remain primary defense regardless of platform efforts.

Whether platforms should bear greater responsibility for social engineering prevention shapes platform regulation.

The Law Enforcement Challenge

Law enforcement faces challenges investigating and prosecuting social engineering crimes that often cross jurisdictions and involve sophisticated concealment.

From one view, law enforcement is inadequately resourced and positioned to address social engineering. International attackers operating from jurisdictions that will not prosecute them are beyond practical enforcement reach. Individual victims file reports that go uninvestigated. The law enforcement response falls far short of the scale of the harm.

From another view, law enforcement has achieved significant successes against social engineering operations. International cooperation has enabled prosecution of transnational fraud networks. High-profile cases create deterrence. The enforcement challenge is difficult but not hopeless.

Whether law enforcement can meaningfully address social engineering or whether other approaches must provide primary response shapes resource allocation.

The Insurance and Recovery

Insurance products and recovery services offer to cover losses and assist victims of social engineering. Their value is contested.

From one perspective, insurance and recovery services provide meaningful protection. Coverage for fraud losses, assistance navigating recovery, and professional support for victims addresses harm that individuals and small organizations cannot absorb alone.

From another perspective, insurance and recovery services may provide limited actual benefit while creating false confidence. Policies have exclusions that may not cover specific losses. Recovery services may accomplish little that informed individuals could not do themselves.

Whether insurance and recovery services provide meaningful protection or exploit fear shapes consumer protection.

The Measurement and Assessment

Measuring organizational vulnerability to social engineering and assessing program effectiveness is difficult, and common approaches may not provide reliable signals.

From one view, measurement enables improvement. Phishing simulation click rates, training completion, incident rates, and other metrics provide visibility into program effectiveness. Without measurement, programs cannot demonstrate value or identify gaps.
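
For instance, a minimal sketch of campaign-level metrics, using hypothetical record fields: alongside the click rate, it computes the report rate, arguably the more informative signal because it captures active defensive behavior rather than mere avoidance.

```python
from dataclasses import dataclass

@dataclass
class SimulationResult:
    employee_id: str
    clicked: bool     # followed the simulated phishing link
    reported: bool    # reported the message to the security team

def campaign_metrics(results: list[SimulationResult]) -> dict[str, float]:
    """Click rate alone is a weak signal; report rate adds context."""
    n = len(results) or 1
    return {
        "click_rate": sum(r.clicked for r in results) / n,
        "report_rate": sum(r.reported for r in results) / n,
    }
```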

From another view, common metrics may not capture actual vulnerability. Low simulation click rates may reflect test design rather than organizational resilience. Training completion does not guarantee behavior change. The appearance of measurement may substitute for actual security understanding.

Whether social engineering vulnerability can be meaningfully measured shapes program assessment.

The Emerging Threat Evolution

Social engineering techniques continue evolving as attackers adapt to defenses and new technologies enable new approaches.

From one view, emerging threats will continue outpacing defenses. AI-generated content, deepfake technology, and techniques not yet developed will create attacks that current defenses cannot address. The arms race will continue favoring attackers who need only find new approaches rather than defending against all possible approaches.

From another view, fundamental social engineering patterns persist despite tactical evolution. Training that addresses psychological patterns rather than specific techniques remains effective regardless of how attacks evolve. The human vulnerabilities being exploited do not change even as exploitation methods do.

Whether emerging threats will defeat current defenses or whether fundamental defensive approaches remain effective shapes security investment.

The Canadian Context

Canadians face social engineering threats similar to those affecting other developed nations, with particular prevalence of CRA impersonation scams, Canada Post delivery fraud, and attacks exploiting Canadian institutions and contexts.

The Canadian Anti-Fraud Centre collects reports and provides resources, though many victims do not report and investigation resources are limited. Canadian financial institutions have implemented fraud detection, though coverage and effectiveness vary.

From one perspective, Canada should strengthen consumer protection, financial institution responsibility, and enforcement capacity to address social engineering.

From another perspective, existing frameworks provide adequate foundation, and focus should be on individual awareness and organizational security practices.

How Canada addresses social engineering shapes protection for Canadians.

The Fundamental Human Vulnerability

Social engineering exploits psychological tendencies that serve important functions: trust enables cooperation, helpfulness enables community, response to authority enables organization, and action under urgency enables survival. These are not bugs but features of human psychology.

From one view, this means social engineering cannot be eliminated. Training people to suppress trust, helpfulness, and social responsiveness would harm the social functioning these traits enable. Defenses must work around human psychology rather than trying to change it.

From another view, meta-awareness provides protection without suppressing beneficial traits. Understanding that these traits can be exploited enables recognizing exploitation when it occurs. People can maintain trust, helpfulness, and responsiveness while being alert to manipulation. Awareness does not require becoming antisocial.

Whether human psychological vulnerability can be addressed through awareness or whether it represents irreducible attack surface shapes defensive expectations.

The Organizational Versus Individual Framing

Social engineering can be framed as an organizational security problem requiring organizational solutions or as an individual behavior problem requiring individual vigilance.

From one view, organizational framing is appropriate. Organizations create the contexts where social engineering occurs, benefit when attacks are prevented, and have resources to implement defenses that individuals lack. Placing responsibility on individuals diverts attention from organizational failures.

From another view, individual behavior ultimately determines outcomes. No organizational control can prevent an employee from clicking a link or revealing credentials. Individual vigilance is the final line of defense regardless of organizational measures. Individual responsibility is not blame-shifting but recognition of where defense ultimately occurs.

Whether social engineering is primarily an organizational or an individual problem shapes responsibility and investment.

The Prevention Versus Resilience Balance

Resources can be invested in preventing social engineering from succeeding or in limiting damage when it does succeed.

From one perspective, prevention should be the priority. Every successful attack prevented avoids harm that resilience can only partially address. Investment in training, verification, and detection provides a better return than investment in damage limitation.

From another perspective, perfect prevention is impossible and resilience ensures acceptable outcomes when prevention fails. Segmented access that limits what compromised credentials can reach, monitoring that detects exploitation quickly, and response capabilities that contain damage provide protection that prevention alone cannot.

Whether prevention or resilience should receive priority investment shapes security resource allocation.

The Question

If social engineering exploits fundamental human psychology, including trust, helpfulness, response to authority, and action under urgency, that serves essential social functions and cannot be simply trained away, can individuals and organizations develop effective defenses against attacks designed by professionals to exploit these traits, or does the human element represent permanent vulnerability that no amount of training, technology, or procedure can eliminate?

When attacks have evolved from crude phishing with obvious errors to sophisticated business email compromise developed through months of research, when AI enables personalized attacks at scale, and when deepfakes can impersonate voices and faces of trusted individuals, do traditional defenses based on recognition and verification remain effective, or has social engineering sophistication exceeded what awareness and procedure can address?

And if the same traits that make people vulnerable to social engineering also make them effective colleagues, family members, and community members, should defense focus on changing human behavior that serves important functions, designing systems that do not depend on humans making perfect security decisions, or accepting that some level of successful social engineering is inevitable cost of maintaining the social trust that functional society requires?
