Most big data processing frameworks are JVM-based. A major gap in such systems is efficiently mapping the software layers and patterns onto the underlying hardware, especially newer technologies like Non-Volatile Memory (NVM), and removing the resulting performance bottlenecks. The Apache Mnemonic project provides abstract models that help resolve memory bottlenecks, e.g. SerDe/marshalling overhead, Garbage Collection (GC) performance issues, memory-storage mapping, massive object caching, object sharing across a cluster, and kernel caching issues. In this talk we present Mnemonic, its architecture, and its programming models and their applications (including integrations with Apache Hadoop and Apache Spark).
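To give a flavor of the programming model the talk covers, here is a minimal sketch of Mnemonic's annotation-driven durable object approach, in which an object's fields live directly in NVM-backed memory rather than on the Java heap, sidestepping SerDe and GC. The annotation, class, and factory names below (`@DurableEntity`, `NonVolatileMemAllocator`, `PersonFactory`, the "pmalloc" service) are drawn from the project's published examples and may differ across Mnemonic versions; treat this as an illustrative assumption, not exact API.

```java
import org.apache.mnemonic.Durable;
import org.apache.mnemonic.DurableEntity;
import org.apache.mnemonic.DurableGetter;
import org.apache.mnemonic.DurableSetter;

// A durable entity: field data is laid out in NVM-backed memory, so instances
// need no on-heap copy, no serialization on access, and add no GC pressure.
@DurableEntity
public abstract class Person implements Durable {

  // Accessors are declared abstract; Mnemonic's annotation processor
  // generates the concrete implementation and a companion factory class.
  @DurableGetter
  public abstract String getName();

  @DurableSetter
  public abstract void setName(String name, boolean destroy);

  @Override
  public void initializeAfterCreate() { /* hook: brand-new durable object */ }

  @Override
  public void initializeAfterRestore() { /* hook: object re-attached from NVM */ }
}

// Usage sketch (allocator service, sizes, path, and key are illustrative):
// NonVolatileMemAllocator act = new NonVolatileMemAllocator(
//     Utils.getNonVolatileMemoryAllocatorService("pmalloc"),
//     8L * 1024 * 1024, "./person.dat", true);
// Person p = PersonFactory.create(act);   // factory generated at compile time
// p.setName("Alice", true);
// act.setHandler(1, p.getHandler());      // persist the handle for later retrieval
```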