Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more. The example below shows how to connect to Kafka from a Node.js code step using the kafkajs package:
import { Kafka } from "kafkajs"

export default defineComponent({
  props: {
    kafka: {
      type: "app",
      app: "kafka",
    },
  },
  async run({ steps, $ }) {
    // Connect to the broker using the host and port from the connected Kafka account
    const kafka = new Kafka({
      brokers: [`${this.kafka.$auth.host}:${this.kafka.$auth.port}`],
    });

    // Consume messages from SampleTopic, starting from the beginning of the topic
    const consumer = kafka.consumer({ groupId: "TestGroup" });
    await consumer.connect();
    await consumer.subscribe({ topic: "SampleTopic", fromBeginning: true });

    let consumedMessage = "";
    const eachMessage = async function ({ topic, partition, message }) {
      // Keep the most recently consumed message as a string
      consumedMessage = message.value.toString();
      return consumedMessage;
    };
    await consumer.run({
      eachMessage,
    });

    // Produce a message to the same topic
    const producer = kafka.producer();
    await producer.connect();
    await producer.send({
      topic: "SampleTopic",
      messages: [
        { value: "Welcome KafkaJS + Pipedream users! " + new Date().toISOString() },
      ],
    });
    await producer.disconnect();

    // Return the consumed message and the consumer group metadata for use in later steps
    const data = await consumer.describeGroup();
    return { consumedMessage, groupDescription: data };
  },
})
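If you prefer to keep the whole workflow in Python, the same produce-and-consume flow can be sketched in a Python code step. This is a minimal sketch, not Pipedream's published example: it assumes the kafka-python package and reads the same host and port auth fields from a connected Kafka account via pd.inputs.

from kafka import KafkaConsumer, KafkaProducer

def handler(pd: "pipedream"):
    # Assumes a connected Kafka account exposing host and port, as in the Node.js example
    broker = f"{pd.inputs['kafka']['$auth']['host']}:{pd.inputs['kafka']['$auth']['port']}"

    # Produce a message to SampleTopic
    producer = KafkaProducer(bootstrap_servers=broker)
    producer.send("SampleTopic", b"Welcome kafka-python + Pipedream users!")
    producer.flush()

    # Read messages from the beginning of SampleTopic, stopping after 5 seconds with no new messages
    consumer = KafkaConsumer(
        "SampleTopic",
        bootstrap_servers=broker,
        group_id="TestGroup",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,
    )
    consumed_messages = [message.value.decode() for message in consumer]
    consumer.close()

    # Return the consumed messages for use in later steps
    return {"consumed_messages": consumed_messages}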
Develop, run, and deploy your Python code in Pipedream workflows. Integrate seamlessly with no-code steps and connected accounts, work with Data Stores, and manipulate files within a workflow. This includes installing PyPI packages from within your code, without having to manage a requirements.txt file or run pip.
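For example, a Python code step can import a package straight from PyPI and Pipedream installs it automatically when the workflow is deployed. A minimal sketch using the requests package (the URL is only a placeholder):

import requests

def handler(pd: "pipedream"):
    # requests is installed from PyPI automatically; no requirements.txt or pip run needed
    response = requests.get("https://api.github.com")
    return {"status": response.status_code}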
Below is an example of using Python to access data from the workflow's trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}