Data Engineering on Air

The Data Engineering hub is focused on bringing together information, experts, organizations, policy makers, and the public to LEARN more about a topic, DISCUSS relevant issues, and COLLABORATE on enhancing research-driven DE knowledge and addressing DE challenges, all in a space where onAir members control where and how their content and conversations are shared, free from paywalls, algorithmic feeds, and intrusive ads.

The onAir Knowledge Network is a human-curated, AI-assisted network of hub websites where people share and evolve knowledge on topics of their interest. 

This two-minute About the Data Engineering onAir video is a good summary of the DE hub’s mission and user experience.

If you or your organization would like to curate a post within this hub (e.g. a profile post on your organization), contact matthew.kovacev@onair.cc.

To become an onAir member of this hub, fill in this short form. It’s free!

Source: Other

OnAir Post: Data Engineering on Air

Kirk Borne

Dr. Kirk Borne is the Chief Science Officer at DataPrime, Inc. He is a sought-after global speaker on topics including data mining, data management, big data analytics, data science, machine learning, artificial intelligence, the Internet of Things, data-driven decision-making, modeling and simulation of dynamic systems, emerging technologies, the future of work, education, and science.

Prior to this role, he was Principal Data Scientist, Executive Advisor, and the first Data Science Fellow at Booz Allen.

Kirk is also the founder and owner of Data Leadership Group LLC.

OnAir Post: Kirk Borne

DE Processes Overview

Data engineering processes involve the design, construction, and maintenance of systems that handle the lifecycle of data, from its collection and storage to its transformation and delivery for analysis and decision-making.

These processes are crucial for ensuring data is accessible, reliable, and usable by other teams within an organization, like data scientists and analysts.

In essence, data engineering processes are the foundation for leveraging data within an organization. By building robust and efficient data systems, data engineers enable other teams to derive valuable insights and make data-driven decisions.
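
A minimal sketch of this lifecycle in Python, using a hypothetical CSV source and a SQLite table as the destination (the file name, columns, and cleaning rule are illustrative assumptions, not a prescribed design):

```python
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Extract: read raw records from a CSV source (hypothetical file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop incomplete rows and normalize types."""
    return [
        (row["user_id"], float(row["amount"]))
        for row in rows
        if row.get("user_id") and row.get("amount")
    ]

def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    """Load: write the cleaned records into an analytics table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS purchases (user_id TEXT, amount REAL)")
    con.executemany("INSERT INTO purchases VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("purchases.csv")))
```

Production pipelines wrap this same extract-transform-load shape in scheduling, monitoring, and schema management.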

Source: Gemini AI Overview

OnAir Post: DE Processes Overview

DE Use Cases Overview

Data engineering encompasses a wide range of use cases, broadly categorized by the need to collect, process, and prepare data for various applications. Key areas include real-time analytics, customer relationship management, fraud detection, and supporting machine learning models.

Data engineering also plays a crucial role in areas like financial services, manufacturing, and healthcare, optimizing operations, improving decision-making, and enabling real-time monitoring.

The field is constantly evolving, with new applications emerging as data volumes and complexity continue to grow.

Source: Gemini AI Overview

OnAir Post: DE Use Cases Overview

DE Tools Overview

Data engineers utilize a diverse array of tools to manage and process data. These include programming languages like Python and SQL, data warehousing solutions like Snowflake and Amazon Redshift, and distributed computing frameworks like Apache Spark.

Other essential tools include Apache Kafka, ETL tools, and workflow orchestration platforms like Apache Airflow.
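
As a rough sketch of how orchestration ties these tools together, here is a minimal, hypothetical Apache Airflow DAG with a daily extract and load step. The DAG name and the callables are placeholders, and exact DAG parameters vary across Airflow versions, so treat this as an outline rather than a drop-in pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")  # placeholder step

def load():
    print("write transformed data to the warehouse")  # placeholder step

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract must finish before load starts
```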

Source: Gemini AI Overview

OnAir Post: DE Tools Overview

Virginia Tech Academy of Data Science

Disclaimer: The content in this post is from the Academy of Data Science website WITHOUT ANY EDITS. Virginia Tech is a public university in Southwestern Virginia, USA.

Virginia Tech’s Academy of Data Science was launched in 2020, in part to help meet the growing demand for workers who possess the skills to analyze data. Data science — a transdisciplinary field that draws upon the theories, methods, and concepts of statistics, mathematics, computer science, and information science to extract knowledge and insight from data — not only impacts all branches of science, but other fields as well.

The Academy of Data Science will focus on the development of methods, techniques, and tools for extracting knowledge and insight from data to further science. In doing so, it will elevate data science as a scientific discipline of its own, as well as bolstering the integration of data science into all scientific fields.  Additionally, the Academy will serve as the connective fabric between the College of Science and other Virginia Tech colleges and institutes as they collaborate to develop new data science methodologies and applications of data science in scientific disciplines.

Tom Woteki, a three-time Virginia Tech alum with a Ph.D. in statistics, was named the founding director of the Academy of Data Science. He also heads the part-time data analysis and applied statistics master’s degree program in the greater Washington, D.C., metro area.

Source: VT Website

OnAir Post: Virginia Tech Academy of Data Science

DAEN – Data Analytics Engineering @GMU

Disclaimer: The content in this post is from the DAEN website WITHOUT ANY EDITS. George Mason University is a public university in Northern Virginia, USA.

George Mason University’s data analytics engineering programs at Volgenau prepare students for their future careers in a growing discipline.

All of our programs are taught by our industry-leading faculty members across schools and colleges in Mason, giving our students the ability to see the numerous possibilities a data-driven degree can offer.

  • Master of Science –– The MS in data analytics engineering is a multidisciplinary degree program at the Volgenau School of Engineering. It provides students with an understanding of the technologies and methodologies necessary for data-driven decision-making.
  • Certificate –– The graduate certificate in data analytics engineering gives students a foundation of basic data analytics and data science principles.
  • Master of Science Online –– The online MS program gives students the flexibility to earn an advanced degree and expand their knowledge in data analytics in an asynchronous format.
  • Certificate Online –– The online graduate certificate in data analytics engineering gives students a foundation of basic data analytics and data science principles with the flexibility of an online asynchronous format.

Data analytics engineering is an expanding field. Therefore, all our programs instruct students on current and innovative tools and prepare them to be adaptable to the future of the field.

 

Source: GMU

OnAir Post: DAEN – Data Analytics Engineering @GMU

  • Fall 2025 News

  • Data Engineering News – Summer 2025

    Feature Post: AI and Data Engineering

    The Featured Post for this month is on AI and Data Engineering.

    Data engineering and AI are deeply intertwined. AI relies heavily on data, and data engineering provides the infrastructure and pipelines necessary to make that data accessible, clean, and usable for AI models. In turn, AI is starting to automate and enhance data engineering tasks, creating a symbiotic relationship.

    Data engineers are the backbone of AI, while AI is becoming a powerful tool for data engineers: enhancing their work, improving efficiency, and unlocking new possibilities.

    • Throughout the week, we will be adding articles, images, livestreams, and videos about the latest data engineering developments to this post (select the News tab).
    • You can also participate in discussions in all Data Engineering onAir posts as well as share your top news items and posts (for onAir members – it’s free to join).
    Data Engineering Weekly #229
    Data Engineering Weekly, Ananth Packkildurai, July 20, 2025

    Sebastian Raschka: From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design

    This article examines the structural changes and architectural developments in modern Large Language Models (LLMs), such as DeepSeek-V3, OLMo 2, Gemma 3, and Llama 4, rather than focusing on benchmark performance or training algorithms. The author details key innovations, including Multi-Head Latent Attention (MLA), Mixture-of-Experts (MoE), various normalization layer placements (Pre-Norm, Post-Norm, and QK-Norm), and sliding window attention, which primarily aim to enhance computational efficiency, memory usage, and training stability.
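
    To make one of these ideas concrete: sliding window attention limits each token to a fixed-size window of recent positions rather than the full causal context, trading a little modeling power for lower memory and compute. The NumPy sketch below shows only the general masking technique; it is not the implementation used in any of the models named above.

    ```python
    import numpy as np

    def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
        """Boolean mask: query i may attend to key j iff j <= i and i - j < window."""
        i = np.arange(seq_len)[:, None]  # query positions, as a column
        j = np.arange(seq_len)[None, :]  # key positions, as a row
        return (j <= i) & (i - j < window)

    # With window=3, token 5 attends only to tokens 3, 4, and 5.
    print(sliding_window_mask(6, 3).astype(int))
    ```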

    Paul Levchuk: The Metric Tree Trap

    The article defines a Metric Tree as a hierarchical decomposition of a top-level business goal into measurable drivers, acknowledging its value primarily for visualisation and team alignment of key performance indicators. However, the author argues that Metric Trees are unreliable for making robust decisions: they frequently obscure crucial operational insights through contradictory metric definitions, inconsistent granularity, hidden trade-offs, and confounding factors, which makes identifying key drivers, finding root causes, and prioritizing accurately much harder. To mitigate these “traps” and ensure reliable conclusions, the author advises pairing Metric Tree insights with rigorous root cause analysis, scenario testing, and a thorough cost-benefit assessment.
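
    For readers new to the structure being critiqued, a metric tree can be sketched as a recursive decomposition of a top-line metric into driver metrics. The toy example below (hypothetical names and values, with a purely multiplicative roll-up) shows why the form is appealing for visualization, and also hints at the trap: the roll-up stays internally consistent even when the drivers hide contradictory definitions or granularity.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MetricNode:
        name: str
        children: list["MetricNode"] = field(default_factory=list)
        value: float = 0.0  # leaves carry observed values

        def rollup(self) -> float:
            """Multiplicative roll-up (hypothetical; real trees mix operators)."""
            if not self.children:
                return self.value
            result = 1.0
            for child in self.children:
                result *= child.rollup()
            return result

    # revenue = orders * average order value: a classic, simplified decomposition
    revenue = MetricNode("revenue", children=[
        MetricNode("orders", value=1200),
        MetricNode("avg_order_value", value=42.5),
    ])
    print(revenue.rollup())  # 51000.0
    ```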

    How to build a billion-dollar AI company (it’s not what you think)
    Metatrends, Peter Diamandis, July 8, 2025

    What makes an AI startup succeed isn’t the tech—it’s the team. It turns out, the most successful companies aren’t always the ones with the best product. Instead, they’re the ones with the best teams. Teams who trust each other and can adapt rapidly. Dave Blundin (Managing Partner of AI venture fund, Link-XPV) quotes legendary VC Fred Wilson: “If they pass the test of being best friends and technical co-founders, I invest—even if the idea is stupid—because their idea will change, but the people won’t.”

    1. The Number One Failure Mode

    When founders aren’t deeply aligned, the first pivot often kills the company. That’s the number one failure mode for startups: somebody bails because they can’t handle the uncertainty. But they’ll succeed in the end if they commit and stick with it, together.

    I remember the early days of Singularity University’s Graduate Summer Studies Program (GSP). We put a hundred alpha males and alpha females into a room and said, “Find other co-founders amongst yourselves and start a company based on exponential tech.” This was very different from Y Combinator (started roughly at the same time). In Y Combinator, the teams entered with preexisting relationships. In retrospect, the failure mode at SU’s incubator was predictable. There was no glue, no shared passion as these teams came together over the course of a few weeks, versus a few years.

    In fact, the companies that did succeed in the GSP were the ones where founders became friends and stuck together through every pivot.

    2. You Can’t Buy Founder Chemistry

    People ask, “Why should they be best friends? Why should they have a relationship spanning years?”

    Here’s why: Meta, Google, and OpenAI are raiding companies and stripping out talent. If you’ve started a company with some stranger and it’s been six months, when someone offers you a huge signing bonus, you’re going to leave.

    But if you’ve started a company with your best friends—people with whom you have real history—you’re not going to abandon them. That’s critical.

    Look at Zuck’s AI talent poaching spree. Meta tried to buy Safe Super Intelligence (SSI) for billions but got rebuffed. Now they’re throwing $100 million signing bonuses at executives like Daniel Gross and Nat Friedman. In the AI talent wars, you can throw money at researchers, but you can’t recreate the magic of bonded co-founders building late into the night.

    3. Ideas Pivot, But Relationships Last

    Fred Wilson, the legendary VC from Union Square Ventures, has a simple philosophy: he backs teams of best friends who are technical co-founders. His logic is: even if the idea is terrible, he’ll invest because great teams pivot fast, but you can’t change relationships overnight.

    Take Israel’s startup ecosystem: five times higher success rate per capita than anywhere else. Why? The military service that predates most of the startups has created lifelong bonds. Marching through the desert, suffering together, appreciating college more, and then starting companies while still in school. They’re older but more bonded.

    4. A Test for Strong Relationships

    Even if you haven’t gone to school with your co-founders, there are ways to test compatibility.

    Here’s mine: People you’d sit next to on a 12-hour flight in coach. How would you feel when you get off that plane? Are you exhausted or energized? That’s how it’ll feel doing a startup together. If you’re with the right people—those you click with—you’ll come off energized after talking about everything under the sun for 10-12 hours.

     

    The future belongs to those bonded teams who can pivot together, stick together, and build together.

    Total Information Awareness, Rebooted
    Substack, William A. Finnegan, June 2, 2025

    The dots were always everywhere, now they can finally connect them.

    I’ve gotten a fair number of questions about the Palantir news, so let me lay out a few key things to keep in mind.

    First:

    The U.S. government has had this kind of capability—the ability to know anything about you, almost instantly—for decades.

    Yeah. Decades.

    Ever wonder how, three seconds before a terrorist attack, we know nothing, but three seconds after, we suddenly know their full bio, travel record, high school GPA, what they had for breakfast, the lap dance they got the night before, and the last time they took a dump?

    Yeah. Data collection isn’t the problem. It never has been. The problem is, and always has been, connecting the dots.

    The U.S. government vacuums up data 24/7. Some of it legally. Some of it… less so. And under the Trump Regime, let’s be honest—we’re not exactly seeing a culture of legal compliance over at DHS, the FBI, or anywhere else. Unless Pete Hegseth adds a hooker or a media executive to a Signal thread and it leaks, we’re not going to know what they’re doing.

    But the safest bet? Assume Title 50 is out the f*ing window.

    For the uninitiated: Title 50 governs U.S. intelligence operations. It prohibits turning that machinery—especially things like NTM (national technical means) or the NSA’s bulk intercept tools—against Americans.

    There are exceptions. But they’re narrow. And they exist precisely to prevent a situation where someone like TACO can ruin your life because he’s having a hissy fit and is enabled by a ketamine-fueled choade with root access.

    And yet—here we are.

    Code Dependent
    Substack, The One Percent Rule, June 2, 2025

    In an era where the rhetoric of innovation is indistinguishable from statecraft, Code Dependent does not so much warn as it excavates. Madhumita Murgia has not written a treatise. She has offered evidence, damning, intimate, unignorable. Her subject is not artificial intelligence, but the human labor that props up its illusion: not the circuits, but the sweat.

    Reading her work is like entering a collapsed mine: you feel the pressure, the depth, the lives sealed inside. She follows the human residue left on AI’s foundations, from the boardrooms of California where euphemism is strategy, to the informal settlements of Nairobi and the fractured tenements of Sofia. What emerges is not novelty, but repetition: another economy running on extraction, another generation gaslit into thinking the algorithm is neutral. AI, she suggests, is simply capitalism’s latest disguise. And its real architects, the data annotators, the moderators, the ‘human-in-the-loop’, remain beneath the surface, unthanked and profoundly necessary.

    The subtitle might well have been The Human Infrastructure of Intelligence. The first revelation is that there is no such thing as a purely artificial intelligence. The systems we naively describe as autonomous are, in fact, propped up by an army of precarious, low-wage workers, annotators, moderators, cleaners of the digital gutters. Hiba in Bulgaria. Ian in Kibera. Ala, the beekeeper turned dataset technician. Their hands touch the data that touches our lives. They are not standing at the edge of technological history; they are kneeling beneath it, holding it up. Many of these annotators are casually employed as gig workers by Scale.AI, valued at US$15 billion.

    Data Engineering Weekly #226
    Ananth Packkildurai, June 30, 2025

    Anthropic: How we built our multi-agent research system

    Anthropic writes about Claude’s Research feature, which uses a multi-agent system that distributes research tasks across specialized subagents via an orchestrator-worker pattern. The architecture boosts performance by parallelizing exploration and token usage, with key insights into prompt engineering (delegation, scaling, tool design), evaluation (LLM-as-judge, human-in-the-loop), and production hardening (stateful runs, debugging, orchestration).
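
    The orchestrator-worker pattern described here can be sketched in a few lines: a lead agent decomposes the question into subtasks, fans them out to worker calls in parallel, and synthesizes the results. The `call_model` function and the prompts below are placeholders, not Anthropic’s actual API or implementation.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def call_model(prompt: str) -> str:
        """Placeholder for a real LLM call (hypothetical)."""
        return f"answer to: {prompt}"

    def orchestrate(question: str) -> str:
        # Lead agent: decompose the question (hard-coded here; a real system
        # would ask the model to produce the subtask list).
        subtasks = [f"{question} (subtask {i})" for i in range(1, 4)]

        # Workers: explore subtasks in parallel, each in its own context.
        with ThreadPoolExecutor(max_workers=3) as pool:
            findings = list(pool.map(call_model, subtasks))

        # Lead agent: synthesize the workers' findings into one answer.
        return call_model("Synthesize these findings:\n" + "\n".join(findings))

    print(orchestrate("How do modern LLM architectures differ?"))
    ```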

    LinkedIn: Introducing Northguard and Xinfra: scalable log storage at LinkedIn

    LinkedIn unveils Northguard, a Kafka replacement built to handle over 32 trillion daily records by addressing scalability, operability, and durability challenges at hyperscale. Northguard introduces a sharded log architecture with minimal global state, decentralized coordination (via SWIM), and log striping for balanced load, backed by a pluggable storage engine using WALs, Direct I/O, and RocksDB. LinkedIn also developed Xinfra, a virtualized Pub/Sub layer with dual-write and staged topic migration, to enable seamless, zero-downtime interoperability between Kafka and Northguard.
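
    To make the striping idea concrete, here is a toy sketch of round-robin log striping: successive appends rotate across fixed stripes so no single stripe absorbs the whole write load. This illustrates only the general balancing technique; Northguard’s actual design (WALs, Direct I/O, RocksDB, decentralized coordination) is far more involved.

    ```python
    class StripedLog:
        """Toy log that spreads appends round-robin across stripes (illustrative only)."""

        def __init__(self, num_stripes: int):
            self.stripes: list[list[bytes]] = [[] for _ in range(num_stripes)]
            self.counter = 0

        def append(self, record: bytes) -> tuple[int, int]:
            stripe_id = self.counter % len(self.stripes)  # rotate across stripes
            self.stripes[stripe_id].append(record)
            self.counter += 1
            return stripe_id, len(self.stripes[stripe_id]) - 1  # (stripe, offset)

    log = StripedLog(num_stripes=4)
    for i in range(8):
        print(log.append(f"record-{i}".encode()))  # writes land evenly on 4 stripes
    ```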

    Canva: Measuring Commercial Impact at Scale at Canva

    Canva writes about its internal app “IMPACT,” a Streamlit-on-Snowflake app that automates measurement of business metrics like MAU and ARR across 1,800+ annual experiments. Built with Snowpark, Cortex, and the Snowflake Python connector, the app replaces manual, error-prone analysis with a self-serve interface that aligns with finance models, supports pre/post-experiment workflows, and stores results for downstream use. Its modular architecture and PR-driven dev workflow enable scalable collaboration, while natural language summaries and scheduled metric calculations streamline impact analysis from hours to minutes.

    Young People Face a Hiring Crisis. AI Is Making It Worse.
    Derek Thompson Substack, June 26, 2025

    Artificial intelligence is transforming the entire pipeline from college to the workforce: from tests and grades to job applications and entry-level work.

    This is a hard time to be a young person looking for a job. The unemployment rate for recent college graduates has spiked to recession levels, while the overall jobless rate remains quite low. By some measures, the labor market for recent grads hasn’t been this relatively weak in many decades. What I’ve called the “new grad gap”—that is, the difference in unemployment between recent grads and the overall economy—just set a modern record.

    In a recent article, I offered several theories for why unemployment might be narrowly affecting the youngest workers. The most conventional explanation is that the labor market gradually weakened after the Federal Reserve raised interest rates. White-collar companies that expanded like crazy during the pandemic years have slowed down hiring, and this snapback has made it harder for many young people to grab that first rung on the career ladder.

    But another explanation is too tantalizing to ignore: What if it’s AI? Tools like ChatGPT aren’t yet super-intelligent. But they are super nimble at reading, synthesizing, looking stuff up, and producing reports—precisely the sort of things that twentysomethings do right out of school. As I wrote:

    Reenvisioning the college major

    Assuming students are still required to complete a major in order to earn a degree, colleges can also allow them to bundle smaller modules – such as variable-credit minors, certificates, or course sequences – into a customizable, modular major.

    This lets students, guided by advisers, assemble a degree that fits their interests and goals while drawing from multiple disciplines. A few project-based courses can tie everything together and provide context.

    Such a model wouldn’t undermine existing majors where demand is strong. For others, where demand for the major is declining, a flexible structure would strengthen enrollment, preserve faculty expertise rather than eliminate it, attract a growing number of nontraditional students who bring to campus previously earned credentials, and address the financial bottom line by rightsizing curriculum in alignment with student demand.

    https://www.youtube.com/watch?v=u3sFf-Y3KfU&ab_channel=AndreasKretz

    In this podcast episode, I’m joined by Simon Späti, long-time BI and data engineering expert turned full-time technical writer and author of the living book Data Engineering Design Patterns.

    We talk about:

    ➡️ His 20-year journey from SQL-heavy BI to modern Data Engineering

    ➡️ Why switching from employee to full-time author wasn’t planned, but necessary

    ➡️ How he uses a “Second Brain” system to manage and publish his knowledge

    ➡️ Why writing is a tool for learning — not just sharing

    ➡️ The concept of convergent evolution in data tooling: when old and new solve the same problem

    ➡️ The underrated power of data modeling and pattern recognition in a hype-driven industry

    Simon also shares practical advice for building your own public knowledge base, and why Markdown and simplicity still win in the long run. Whether you’re into tools, systems, or lifelong learning, this one’s a thoughtful deep dive.

    Data Engineering Weekly #225
    Data Engineering Weekly, June 23, 2025

    Uber: The Evolution of Uber’s Search Platform

    Shopify: Introducing Roast – Structured AI Workflows Made Easy

    Sem Sinchenko: Why Apache Spark is often considered as slow

    Meta: Collective Wisdom of Models: Advanced Feature Importance Techniques at Meta

    From Docker to Dagger (w/ Solomon Hykes)
    The Analytics Engineering Podcast, Dan Poppy, June 22, 2025 (48:49)

    https://roundup.getdbt.com/p/from-docker-to-dagger-w-solomon-hykes

    In this season of the Analytics Engineering podcast, Tristan is digging deep into the world of developer tools and databases. There are few more widely used developer tools than Docker. From its launch back in 2013, Docker has completely changed how developers ship applications.

    In this episode, Tristan talks to Solomon Hykes, the founder and creator of Docker. They trace Docker’s rise from startup obscurity to becoming foundational infrastructure in modern software development. Solomon explains the technical underpinnings of containerization, the pivotal shift from platform-as-a-service to open-source engine, and why Docker’s developer experience was so revolutionary.

    The conversation also dives into his next venture, Dagger, and how it aims to solve the messy, overlooked workflows of software delivery. Bonus: Solomon shares how AI agents are reshaping how CI/CD gets done and why the next revolution in DevOps might already be here.

    We all love our little language wars: the vitriol, the expletives, the dank memes that accompany them.

    You may recall, as your friendly neighborhood Anonymous Rust Dev, that I’ve been magnanimous and kind to the Python language. This is because I’m above it all – I can find the beauty in any language if I look hard enough, and I’m willing to give credit where it’s due.

    But even I have my breaking point, which I’d like to explore today. How clunky does a language need to be to make itself my foe?

    If You’re New to Data, Read This Before You Build Anything
    SeattleDataGuy’s Newsletter, Ben Rogojan, June 19, 2025

    1) Avoid Being The Only Engineer On A Team Early On In Your Career

    2) Hype Won’t Fix Your Business Problems

    3) Solving Business Problems Is Hard

    4) Don’t Skip The Fundamentals

    5) Don’t Be Above Putting in the Work

    6) Ask Why – But Don’t Always Expect A Great Answer

    7) Learn to Communicate with Non-Data People

    AI Mode in Google Search: Redefining User Interaction and SEO
    The AI Journal, Sarah Evans, June 4, 2025

    What’s Next? Predictions for the Future of Search and SEO

    • Visibility Shifts from Keywords to Topical Coverage
    • Personalization Becomes Standard
    • Paid Placements and Monetization in AI Mode
    • The Agency Divide: Traditional vs. Modern Approaches
    • New Analytics and Search Console Features
    • Final Thought: Search Is Now a Conversation

    As AI Mode evolves from an experiment to a default experience, search will feel less like a transaction and more like an ongoing conversation. Success will go to brands that can be present, relevant, and trusted at every step—no matter how the questions change.

    To grow and thrive in the rapidly evolving AI landscape, organizations must strategically invest in their data engineering capabilities.

    In today’s digital landscape, businesses generate vast amounts of data daily that can be processed, analyzed, and interpreted to support future scalability and growth. This is where AI-driven systems become integral across industries, enabling real-time analytics, forecasting, and AI-driven automation. Beverly D’Souza, a Data Engineer at Patreon who previously worked at Meta, has played a key role in improving data workflows, processing data at pace, and launching machine learning models. With experience in ETL pipelines, cloud data systems, and AI analytics, she shared, “Building scalable AI-powered data pipelines comes with key challenges and to overcome these obstacles, organizations must implement distributed computing frameworks that can handle large-scale data processing efficiently. Incorporating AI-driven automation helps streamline data processing tasks, making the entire system faster and more efficient.”
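
    As a small illustration of the distributed-computing point D’Souza makes, the PySpark sketch below aggregates a large event log in parallel. The path and column names are hypothetical; the point is that the same dataframe code runs on one machine or across a cluster of many executors.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

    # Hypothetical event log; Spark partitions both the read and the
    # aggregation across however many executors the cluster provides.
    events = spark.read.json("s3://example-bucket/events/")

    daily_counts = (
        events
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day", "event_type")
        .count()
        .orderBy("day")
    )

    daily_counts.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
    ```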

    Future-Forward Data Engineering in the age of Agentic AI
    Medium, Adnan Masood, PhD, June 9, 2025

    The Efficacy of Standardized Interfaces in Reducing Manual Intervention, Improving System Resilience, and Enabling Advanced Governance in Enterprise Data Platforms.

    Future Outlook
    Agentic AI, MCP, and A2A are poised to reshape enterprise data platforms in the next 3–5 years. We will likely see middleware for agents become mainstream: platform vendors already integrate these protocols. Systems will evolve from static ETL pipelines into adaptive, self-optimizing networks. For instance, future data lakes might auto-tune storage tiers based on usage patterns discovered by agents, or data warehouses could self-partition hot tables. As Microsoft’s announcements highlight, AI is moving toward an “active digital workforce” [33]: think LLMs that don’t just suggest queries, but execute workflows end-to-end. With A2A, agents from different vendors and clouds will interoperate, breaking current silos. Enterprises will embed AI into governance: agentic systems continuously audit for compliance.

    We may also see advances in model capabilities driving agentic efficiency — e.g., hybrid systems where a symbolic planner guides LLMs, or LLMs with built-in code execution (like Azure’s CUA) making some MCP calls redundant. Standards (MCP, A2A) will likely expand; Google’s A2A is already a collaboration with 50+ partners [40], promising broader interoperability. In short, the data platform of the future could sense, reason, and act: as data patterns shift, agents reconfigure pipelines; when costs spike, agents throttle resources; when new regulations arrive, agents update data handling policies. This vision of an adaptive, self-driving data platform is on the horizon thanks to agentic AI and these new protocols.

    Amperity, the AI-powered customer data cloud, today launched Chuck Data, the first AI Agent built specifically for customer data engineering. Chuck uses Amperity’s years of experience and patented identity resolution models, trained on billions of data sets across 400+ enterprise brands, as critical knowledge behind the AI. Chuck runs in the terminal and empowers engineers to quickly understand their data, tag it, and resolve customer identities in minutes – all from within their Databricks lakehouse.


    As pressure mounts to deliver business-ready insights quickly, data engineers are hitting a wall: while infrastructure has modernized, the work of preparing customer data still relies on manual code and brittle rules-based systems. Chuck changes that by enabling data engineers to “vibe code” – using natural language prompts to delegate complex engineering tasks to an AI assistant.

    Data Engineering Transformation with AI Agents
    AIM Research, Manjunatha G, May 28, 2025 (12:00)

    https://www.youtube.com/watch?v=LQj4fe7GEJ0&ab_channel=AIMResearch

    This session explores how AI agents are transforming data engineering by automating complex workflows such as data ingestion, transformation, and pipeline orchestration. With real-time analytics and intelligent decision-making becoming critical, AI-driven automation is enabling greater efficiency, scalability, and accuracy in data processes.

    From automated anomaly detection to schema evolution and performance optimization, discover practical use cases that showcase the power of AI in simplifying and future-proofing data engineering strategies. Ideal for data engineers, architects, and AI enthusiasts, this talk offers insights into leveraging AI agents to reduce operational overhead and stay ahead in the era of intelligent automation.
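
    As one concrete example of the automated anomaly detection mentioned above, the sketch below flags pipeline metrics (say, hourly row counts) that drift several standard deviations from a rolling baseline. The window and threshold are arbitrary illustrative choices, not values from the talk.

    ```python
    import statistics

    def detect_anomalies(values: list[float], window: int = 24, threshold: float = 3.0) -> list[int]:
        """Return indexes whose value deviates > threshold sigmas from the prior window."""
        anomalies = []
        for i in range(window, len(values)):
            baseline = values[i - window:i]
            mean = statistics.mean(baseline)
            stdev = statistics.stdev(baseline)
            if stdev > 0 and abs(values[i] - mean) / stdev > threshold:
                anomalies.append(i)
        return anomalies

    # Hourly row counts with one obvious drop an agent might alert on.
    counts = [1000.0 + (i % 5) for i in range(30)]
    counts[28] = 80.0
    print(detect_anomalies(counts))  # -> [28]
    ```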

    Data Engineering Digest
    Data Science Council of America and Aggregage

    Data Engineering Digest brings together the best content for Data Engineering Professionals from the widest variety of industry thought leaders. It is a combined effort of the Data Science Council of America and Aggregage. The goals of the site and newsletter are to:

    Collect High Quality Content – The goal of a content community is to provide a high quality destination that highlights the most recent and best content from top quality sources as defined by the community.

    Provide an Easy to Navigate Site and Newsletter – Our subscribers are often professionals who are not regular readers of the blogs and other sources. They come to the content community to find information on particular topics of interest to them. This links them across to the sources themselves.

    Be a Jump Off Point – To be clear, all our sites/newsletters are only jump-off points to the sources of the content.

    Help Surface Content that Might Not be Found – It’s often hard to find and understand blog content that’s spread across sites. Most of our audience are not regular subscribers to these blogs and other content sources.
