PhysIQ is a transformational leader in applying highly sophisticated technology to some of the most pressing problems in healthcare. More specifically, we are forging the frontier of healthcare delivery at the intersection of mobile technology and artificial intelligence. Our team comprises veteran technologists and world-class data scientists, and our solutions set the market standard for scalability and sophistication. Furthermore, we are implementers with a proven track record of transforming an audacious technological vision into mission-critical solutions for our customers.
- Our core values are simple and are defined by integrity, passion and relentless drive toward solving the impossible.
- We are a team in its purest definition. We all pull on the rope together, in the same direction, with the same intensity.
- Our customers and their patients depend on us to deliver technology that will forever change healthcare. We are literally keeping people out of the hospital. We are changing lives.
In our world, amazing things only happen when people make them happen. If you want to make things happen, and do it with a world-class team of visionaries and doers, we encourage you to apply.
- Express real-time data flows as a directed acyclic graph, transforming raw device telemetry into patient health insights
- Analyze data set usage patterns and leverage storage systems that meet both functional and nonfunctional business requirements
- Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
- Understand operational complexity of the platform, provide solutions to increase reliability and efficiency and provide insights into system health
- Assist in troubleshooting data-related technical issues and identify proper solutions
- Work in an agile/scrum environment
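The first responsibility above, expressing real-time data flows as a directed acyclic graph, can be illustrated with a minimal sketch. The stage names, telemetry format, and thresholds below are hypothetical, chosen only to show how transforms on raw device telemetry can be wired together as a DAG and executed in topological order:

```python
# Minimal DAG pipeline sketch: each node is a transform on device
# telemetry; edges define which stage feeds which (hypothetical stages).
from graphlib import TopologicalSorter

def parse(record):
    # Assume raw telemetry arrives as "device_id,heart_rate"
    device_id, hr = record.split(",")
    return {"device": device_id, "heart_rate": int(hr)}

def validate(event):
    # Drop physiologically implausible readings (illustrative bounds)
    return event if 20 <= event["heart_rate"] <= 250 else None

def enrich(event):
    # Derive a simple insight flag from the validated reading
    event["tachycardia"] = event["heart_rate"] > 100
    return event

# The DAG: parse -> validate -> enrich (node -> set of predecessors)
dag = {"validate": {"parse"}, "enrich": {"validate"}}
stages = {"parse": parse, "validate": validate, "enrich": enrich}
order = list(TopologicalSorter(dag).static_order())

def run(record):
    data = record
    for stage in order:
        data = stages[stage](data)
        if data is None:  # a stage rejected the record
            return None
    return data

print(run("dev-42,128"))
# -> {'device': 'dev-42', 'heart_rate': 128, 'tachycardia': True}
```

In a production system the same structure would typically be expressed in a stream-processing framework (e.g. a Flink or Spark Streaming job graph) rather than hand-rolled, but the core idea is the same: acyclic stages, each consuming its predecessors' output.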
- 4-year technical degree (computer science or engineering)
- 5+ years developing, maintaining, and testing infrastructure for data generation
- Experience working in a distributed systems environment and an understanding of partitioning, consistency, etc. in distributed systems
- Knowledge of multiple database paradigms and data modeling techniques
- Experience with stream-processing systems: Storm, Flink, Spark Streaming, etc.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases: Riak, Postgres, Cassandra, etc.
- Experience with multiple languages: Java, Kotlin, Python, C++, Clojure, Scala, or other JVM languages
- Eagerness to learn or continue practicing functional programming