Quarterly Q – April 2018

Hello READER,

Hope everyone had a successful first quarter.  As Q2 kicks off, I wanted to share more insight on a newer capability we’re hearing more and more about: cloud data warehousing.  As we all know, infrastructure is moving into the cloud at a very fast rate, and data warehousing is becoming one of the components companies are homing in on.  I’d like to introduce our friend Frank Bell.  Frank runs IT Strategists, a consulting firm that helps teams deliver data solutions to organizations including Disney, Ticketmaster, Nissan, Toyota, the USAF, and Unilever.  He’s very excited to share his experience and insight on this topic with all of us.

Snowflake – Cloud Data Warehousing Revolution

It’s the age of disruption: companies must be agile and data-driven, or in due time they will almost certainly be disrupted and replaced.  That prospect is scary, but it’s also an exciting time.

As most of us can attest, data problems and inefficiencies exist in all of our organizations in some capacity.  While data warehousing and big data solutions such as Teradata, Netezza, Vertica, Impala, Presto, Redshift, and Hadoop have been around for a while, these technologies are complex to integrate, still lead to scaling challenges and slow implementations, remain very costly, and offer little sustainable flexibility.

From our experience, most of the organizations we survey, assess, and work with have a ton of data, and managing it is getting more and more complex.  Companies are always on the prowl for faster access to data to increase the value of analysis and automation.  Slow-moving technology executives and teams are constantly being side-stepped by marketing and operations teams, which move data into their own cloud silos.  Data complexity is growing rapidly, and companies are experiencing many problems, including:

  • Data speed.  Data loading for many businesses is still batch driven and often takes hours and sometimes even days.  Modern businesses just cannot wait this long to analyze and drive automation.
  • Data concurrency problems.  Business users often cannot access the latest data fast enough or have to wait until loading is done.
  • Data sources are more numerous and varied.  (Not just traditional rows/columns but JSON, AVRO, Parquet, etc.)
  • Data is almost always in silos and cannot be cross-referenced.
  • Data access is too often complex.

In addition, data security is a huge concern as breaches continue to increase across the ecosystem.

One of the tools we feel is imperative for combating these complexities is Snowflake.  First off, Snowflake is easy to use, very fast, and handles concurrency issues and limitations effortlessly.  Some of the other efficiencies Snowflake brings to the table include:

  • SQL is the most common technical language in use, and it’s relatively easy even for business users to pick up compared with learning new syntax and languages.
  • Being able to easily load, query, and relate JSON, XML, Parquet, and other sources with relational data makes analysis much faster (see the sketch after this list).
  • It allows the creation of entire clones of production in seconds.  No more waiting hours to duplicate content, which is amazingly efficient for QA and data quality work.
  • Security is now handled for you by dedicated security experts.
  • Time Travel eliminates the need for costly and complex backup operations.  You can even query previous versions of your tables down to the millisecond.
  • Separating compute and storage opens up major innovations not available before.
  • Because compute can now be separated, organizations can run isolated workloads for data loading, marketing, operations, data scientists, and so on.
  • You pay only for what you use, so you can size costs to your workload when you need it.  No longer do you have to buy hardware scaled for the maximum use case.
  • Cutting database administration to roughly one tenth of its former cost is amazing for TCO.  All that expertise in indexing, vacuuming, and the like is no longer something you pay for; it comes as part of the service.
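
To make a couple of these capabilities concrete, here is a minimal sketch of querying Snowflake from Node.js with the snowflake-sdk package.  This is an illustrative example only; the account, warehouse, database, and table names are hypothetical placeholders.  It reads a semi-structured JSON (VARIANT) column with plain SQL and uses a Time Travel clause to look at the table as it was an hour ago.

// Illustrative sketch only; account, warehouse, and table names are hypothetical.
const snowflake = require('snowflake-sdk');

const connection = snowflake.createConnection({
  account: 'myaccount',                      // hypothetical account identifier
  username: process.env.SNOWFLAKE_USER,
  password: process.env.SNOWFLAKE_PASSWORD,
  warehouse: 'REPORTING_WH',                 // an isolated virtual warehouse for this workload
  database: 'ANALYTICS',
  schema: 'PUBLIC'
});

connection.connect((err) => {
  if (err) {
    console.error('Unable to connect: ' + err.message);
    return;
  }

  // Query a JSON (VARIANT) column alongside relational data, as the table
  // looked one hour ago, via the Time Travel AT(OFFSET => ...) clause.
  connection.execute({
    sqlText: `
      SELECT event:device.os::string AS os, COUNT(*) AS events
      FROM raw_events AT(OFFSET => -3600)
      GROUP BY 1
      ORDER BY events DESC`,
    complete: (err, stmt, rows) => {
      if (err) {
        console.error('Query failed: ' + err.message);
      } else {
        console.log(rows);
      }
    }
  });
});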

We have seen very positive results with Snowflake implementations in a very short amount of time, including:

  • 78% cost savings when replacing an on-prem data warehouse and Hadoop.
  • Implementation time dropping from months or years to weeks.
  • ETL runs shrinking from days to hours or even minutes.

by Frank Bell
Big Data Principal | IT Strategists | www.ITStrategists.com

Elastic Beanstalk And Docker

Problem

At Lykuid we needed a mechanism to ingest customer data.  It had to provide high availability and complete isolation, so that customers are not impacted by possible downtime, service upgrades, or bugs introduced by other components.  This called for an isolated service that would be simple and robust.

We also needed predictable response times and minimal resource constraints.  The platform needed to support high concurrency without requiring a large thread or worker pool.  To do this, we needed an application in which all I/O is asynchronous.

Solution

We chose Node.js because it provides concurrency without having to manage resource pools.  With Node.js we were able to implement our logic in a performant, high-level language without the concern of being blocked by outside services.
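
As a rough illustration of what that looks like in practice (a generic sketch, not Lykuid’s actual code), a single Node.js process can accept many concurrent ingestion requests because every I/O operation hands control back to the event loop instead of blocking a thread:

// Generic sketch of an asynchronous ingestion endpoint; the /ingest route and
// the persist() helper are hypothetical stand-ins, not Lykuid's implementation.
const http = require('http');

// Placeholder for an asynchronous persistence call (queue, database, etc.).
function persist(payload, callback) {
  setImmediate(() => callback(null));
}

const server = http.createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/ingest') {
    res.writeHead(404);
    return res.end();
  }

  let body = '';
  req.on('data', (chunk) => { body += chunk; });   // streamed, non-blocking reads
  req.on('end', () => {
    // While this write is in flight, the event loop keeps serving other requests.
    persist(body, (err) => {
      res.writeHead(err ? 500 : 202);
      res.end();
    });
  });
});

server.listen(3000, () => console.log('ingestion service listening on port 3000'));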

Elastic Beanstalk is an Amazon-managed service that provides monitoring and automatic provisioning.  It reduces our maintenance burden by handling upgrades and by automatically scaling out and back in.  Elastic Beanstalk also provides log management, archival, and metric collection, and it includes an Amazon-provided Docker platform, which allows us to run our application in a containerized environment.

Why Docker with Elastic Beanstalk?

Traditional Elastic Beanstalk deployments use Amazon Linux running Node.js, which runs your application.  This ties you to Amazon’s Node.js version and configuration.  By using Docker we are able to customize the Node.js environment and package it with our dependencies.  This provides greater control over our application and does not tie us to the constraints of traditional Elastic Beanstalk environments.  It also gives us the flexibility to use any published Docker base image on Docker Hub or other registries.

For this use case, we selected Amazon’s Elastic Beanstalk with the Docker platform and the Elastic Container Registry (ECR).  Elastic Beanstalk provides us with a cluster of ingestion nodes spread across multiple availability zones, with a managed platform capable of running standard Docker images.

Elastic Beanstalk provides us with deployment automation, health monitoring, log and metric collection and auto scaling.

Elastic Beanstalk / Docker Architecture

A developer writes a Dockerfile that describes how to package the application into a Docker image, builds the image with the docker build command, tags it with docker tag, and pushes it to ECR with docker push.  The image is now housed on Amazon’s infrastructure and is ready to be deployed using Elastic Beanstalk.  With the image on ECR, the developer can launch a Docker Elastic Beanstalk environment and deploy the application by providing a Dockerrun.aws.json file.

Example Dockerfile

# Node.js 6 LTS ("Boron") base image
FROM node:boron

# Create and switch to the application directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app


# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

COPY . /usr/src/app

# The application listens on port 3000 (mapped in Dockerrun.aws.json below)
EXPOSE 3000
CMD [ "npm", "start" ]

Example Dockerrun.aws.json

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "392939824843.dkr.ecr.us-east-1.amazonaws.com/myproject:0.1",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ]
}

Contribution by: Lykuid Blog

Node.js & the Event Loop – by Tim Fulmer, VP of Engineering at HopSkipDrive

Remember when object instantiation was such a big deal that EJBs made sense?  It cost so many CPU cycles to allocate a new object that it made sense to cache objects in a pool and swap state in and out of them.  I once saw a system that couldn’t stand up under load because of Object.newInstance.

So, I’ve done a lot with Java.  Enough to have wandered into the depths of the JVM more than a few times.  And friends, I’m never going back.

RAM and virtualized CPUs, cheaply and readily available through your cloud provider of choice, have made the JVM as obsolete as EJB.  Node.js is one of a new generation of single-threaded execution environments helping to serve content.

Intelligent memory allocation strategies and horizontal CPU capacity allow for a stack of functions, each with everything it needs right in memory, maximizing CPU cycles on as many CPUs as it takes.  Literally getting things done as fast as possible.

This runtime environment also accurately models what’s happening at the hardware level. Making an HTTP request in code queues up some state on the bus, the CPU pulls a bit to notify the network adapter to make the call, the network adapter reads the packet off the bus and sends it on its way. Later, the network adapter pulls a bit to notify the CPU, and a response is delivered back to the code.

Linux has offered asynchronous I/O for many years, and that model has been making its way up the stack.  Node.js pairs the V8 JavaScript engine with an event loop that accurately models the underlying process while taking away most concurrency concerns.
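
A tiny sketch (mine, not the author’s) shows that model in action: the synchronous code runs to completion first, and the asynchronous callbacks are dispatched later by the event loop as their timers and I/O complete, without ever blocking the single JavaScript thread.

// Minimal event-loop illustration; save as any file and run it with node.
const fs = require('fs');

console.log('first: synchronous code runs to completion');

setTimeout(() => {
  console.log('later: timer callback, dispatched by the event loop');
}, 0);

fs.readFile(__filename, 'utf8', (err, data) => {
  console.log('later: file read finished without blocking the thread');
});

console.log('second: still synchronous - nothing above has blocked');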

Of course, there are some trade-offs.  JavaScript is one of the more loosely typed languages out there, and there are many things that can be done in JavaScript that really aren’t good ideas.  Automated code quality tools like JSHint and JSLint can help, and there are some good SaaS tools available as well.

It’s also really easy to make changes in a JavaScript system.  That can of course be a good thing, though it can also lead to some very interesting application behavior.  At the code level, we’re used to anything being undefined at any time; combine that with a schema-less Mongo database and much of the traditional, strongly typed software development safety net is missing.  At the overall process level, this makes automated unit and integration tests forming a comprehensive regression suite an absolute must.
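
A contrived example (not from any production codebase) of the safety net that goes missing: nothing in the language or in a schema-less document stops a misspelled or absent field from silently coming back as undefined, so only a test catches the problem before production does.

// Hypothetical document shape; nothing validates it.
const userFromMongo = { name: 'Ada', email: 'ada@example.com' };

const phone = userFromMongo.phoneNumber;   // field doesn't exist: no error, just undefined
console.log(phone.replace(/-/g, ''));      // TypeError, but only at runtime when this line executes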

This isn’t necessarily a bad thing: add some continuous integration and continuous deployment on one of the auto-scaling cloud providers mentioned above and we’re starting to achieve a very high-velocity tech environment.

Are there things to look out for in a JavaScript system?  Absolutely.  At the same time, navigating these trade-offs can lead to a fast-moving technology solution that scales linearly and predictably with load.  I’m certainly never going back 🙂

Tim Fulmer can be reached at https://www.linkedin.com/in/timfulmer

Develop an App Using Native Code or Use a Hybrid/Web Framework – by Shuki Lehavi, Sr. Director of Engineering at J2

After PhoneGap and Titanium/Appcelerator, Ionic/Drifty is the latest to offer a Hybrid framework for developing cross-platform apps.  I thought this was a great opportunity to revisit the fundamental question of “What is the best way to build your app: Native code or a Hybrid framework?” and to see whether the recent evolution of these Hybrid frameworks might change your decision.

Hybrid frameworks promise to solve two problems: 1) allowing Web developers with HTML/JavaScript experience to build apps; and 2) allowing developers to code the app once, and deploy it to many platforms, such as iOS and Android.

In the early years of Android and iOS development, building an app was a pain.  Android apps were easy to code using Java, but creating an appealing user experience took a great deal of effort.  In contrast, iOS allowed you to create a beautiful user experience, but the coding language (Objective C) was cumbersome and illegible.  A Hybrid framework sounded like the right idea at the time.

But after I coded my first commercial app using a Hybrid framework, I noticed the following issues:

1) Hybrid was great for ‘Hello World’ apps, but not for full-featured apps: Using the Hybrid framework, we got our first 10 screens up and running in no time.  But the problems started when we wanted to do complex tasks such as running threads in the background to load data, gracefully recovering from network disconnects, controlling data caches, and more.  The app started crashing, and we soon realized that no Hybrid framework manages threads and tasks as efficiently as Native code.

2) There are not enough good developers (and resources) to support Hybrid frameworks: When something goes wrong with Native code, there are many developers and online resources you can call on.  Search oDesk.com for “iOS” and you get 7,731 developers, or “Android” and you get 13,770 developers.  But if you search for “Titanium” you get 561 developers, and “Ionic” returns 126 developers, 9 of which are in the US.

A search on stackoverflow.com shows similar results. “Objective C” returns 112,000 questions answered, “Android” shows 649,000 questions answered. But a search for “Titanium” returns 11,000 results (most are old) and “Ionic” returns 6,000.

I specifically selected stackoverflow.com and oDesk.com because I regard them as independent sources; when I looked at the websites of some of the Hybrid platform providers, I noticed very creative numbers.  One platform provider claimed “Make your code part of other great mobile apps by publishing them in the Marketplace and sell to the 1.5 million developers”, which is troublesome.

So when your project goes south and the deadline is approaching, you will be hard pressed to find the experts or resources that can help you.  Most of these Hybrid platform providers do have a Professional Services group, but now we are talking very high hourly rates.

3) Hybrid frameworks aren’t easy to debug: Most Hybrid frameworks wrap around your HTML/JavaScript code, and this code runs inside the Hybrid container.  When your app starts having performance or stability issues such as memory leaks or thread locking, debugging becomes a nightmare.  Identifying whether the problem is your app, the Hybrid container, or the specific way you wrote your app to conform to the Hybrid container will cost you a lot of time, and money.

For years it seemed like Hybrid frameworks were the lesser of the evils.

And then came Swift and Android Studio.

With Swift, Apple made it incredibly simple to build stunning iOS apps.  Swift’s syntax is very similar to JavaScript or Java; add the latest improvements in Xcode and the extensive online tutorials, and you have a winning combination for iPhone development.

With Android Studio, Google finally delivered a fantastic development tool that makes it easy to build a user interface and test it on many devices.  Add the superb code-editing features of IntelliJ IDEA, and Android development is now also easy.

But what if my app is really just a collection of HTML pages?  First, you should know that section 2.12 of the App Store Review Guidelines states that “Apps that are not very useful, unique, are simply web sites bundled as Apps, or do not provide any lasting entertainment value may be rejected”.  Even so, I would still suggest that building a simple app with one WebView and your web pages is a better solution than being tied to a Hybrid platform and its complexity and release cycles.

But what about cross-device development, one codebase that runs magically on all devices?  Well, that depends on how extensive your app is.  You see, even the best Hybrid frameworks have certain features that may or may not work depending on the platform.  So unless you are building a VERY simple app, you are most likely designing and coding for both iOS and Android anyway.

That was only my experience; to see what others are doing, let’s take a look at the industry.  As I looked at the “Showcase” sections of these Hybrid platform providers and searched the app store for reviews of the featured apps, it became clear that these apps are not well received by users.

Conclusion: as long as there are developers and multiple platforms, there will be developer tools and Hybrid platforms.  But with the app economy exploding, both Android and iOS are keeping their promise of providing superb tools for developers and are quickly surpassing the Hybrid platforms in today’s market.  Will that always be the case?  Will someone develop a Hybrid platform that is better than Native code?  I will let you be the judge of that.

You Should Never Build Native! – by John O’Connor

John O’Connor is an Entrepreneur, Engineer, Educator, Serial CTO, Startup Junkie, and Tech Nerd with social polish.  He currently serves as CTO at CardBlanc.  In this issue of Tech Splash he shares his insight into the new React Native framework and why you should never build native:

Building your application for a platform purely in its native language and API has always been a risky endeavor.

It’s not hard to imagine the horrors of having the core of your entire business run on software that would only work in Windows 95 because it relied on some long-scrapped Windows API (I use this example because I’ve actually seen that happen – and they’re still running it 20 years later).

At its core, it is a form of vendor lock-in – one that many platform providers are all too happy to encourage.  In the desktop world we’ve already made the move: from .NET’s Common Language Runtime (CLR) to Java’s JVM, savvy programmers understand that portability is important when it comes to ensuring a software product’s survivability.

Except, it seems, when it comes to mobile app development.

I’ve worked with countless CTOs and VPs who would argue, to the death, that building native for mobile is better than using a CLR or web-based technology.  It seems the lessons of the past two decades have not been learned.  Or perhaps they’re aiming for a job-security angle.  Either way, it’s silly to think that the same trend of ‘agnosticising’ won’t eventually make its move into mobile.  The question is how and when.

I’ve been a fan of using web technologies for building native apps for a long time.  Web technologies were already designed to solve the problem of cross-platform compatibility, and as WebComponents come ever-closer to reality, the abstraction that we need to build component-based native apps is fast becoming a standard for the web.  I thought WebOS was a brilliant turn and Palm (and later HP) missed the boat by scrapping this obviously useful paradigm.

Last week, building native mobile apps got a lot more interesting.

React, Facebook’s very fast open-source component-building system for JavaScript, just got a major upgrade with React Native.  In addition to playing nicely with other front-end architectural systems (Backbone, Ember, Angular), developers can now use React to build native mobile applications.

Now before you grab your pitchforks and chant “Appcelerator – Xamarin – PhoneGap”, there’s a major difference that makes React Native not just another web-app wrapper.  React Native is already being used – on probably the most widely proliferated app in existence: Facebook’s mobile app [1].  Gone are the days when Facebook eschewed HTML5 and regretted ever betting on it [2].

And React itself is not just a web-based technology.  It’s a component-building system that works with ANY imperative view technology (for example, UIKit or Android’s View SDK).  Using a “virtual DOM” and overridable, side-effect-free rendering functions means that React is not ON the web, but merely OF the web.  Writing components in an abstract way and allowing the underlying rendering technology to change is how we’ve built portability every time (from C++ compilers to virtual machines), and it seems React has finally brought some semblance of this to the mobile world in a way that has already been proven at large scale.
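
For a flavor of what that looks like (a minimal sketch, not Facebook’s code; the component and app name are hypothetical), a React Native component uses the same declarative render function as a web React component, but its View and Text elements are rendered by the platform’s native view system rather than by a browser:

// Minimal React Native component sketch; 'HelloNative' is a hypothetical app name.
import React, { Component } from 'react';
import { AppRegistry, Text, View } from 'react-native';

class HelloNative extends Component {
  render() {
    // Same component model as React on the web, but <View> and <Text>
    // become native UIKit / Android views at runtime.
    return (
      <View>
        <Text>Hello from a native view!</Text>
      </View>
    );
  }
}

AppRegistry.registerComponent('HelloNative', () => HelloNative);
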
[1] https://code.facebook.com/posts/1014532261909640/react-native-bringing-modern-web-techniques-to-mobile/

[2] http://venturebeat.com/2012/09/11/facebooks-zuckerberg-the-biggest-mistake-weve-made-as-a-company-is-betting-on-html5-over-native/

“When Exactly IS The Right Time To Innovate” by Josh Hatter

Josh Hatter is a technologist and former broadcast and digital operations executive.  His experience includes deploying high-performance technology infrastructure, designing asset management tools, and defining workflows used in the production of online, broadcast, and cinematic content at media companies like TMZ and Revolt TV.  He has overseen Engineering, IT, and Systems Administration teams supporting an array of business verticals.  Josh currently provides consulting services throughout the country in addition to advising and mentoring startups.

Fun Fact: A single Google query uses 1,000 computers to retrieve an answer in 0.2 seconds.

When Exactly IS The Right Time To Innovate – by Josh Hatter

Designing and building out greenfield technical operations facilities can be a daunting task.  Creating a technology budget, fighting to retain that budget, and going through the inevitable value engineering process to trim down to the absolute essentials is what I usually wind up doing.  Something else to consider is the increasingly fast pace at which technology evolves, which can make the original design obsolete some time between week one and week thirty-six of the job.  That evolution can also be beneficial, though, by forcing me to innovate in areas I might not otherwise consider.

I have a few criteria I evaluate when considering a non-traditional, bleeding edge or innovative solution to a problem:

  • Does the solution save money?  I mean real money, not savings achieved by cutting corners or by an unstable environment that results in more downtime or a lot of overtime support hours.
  • Does the solution save significant time?   Does it save enough time to be worth the risk of being an early (or only) adopter, or maybe using consumer products in an enterprise environment?
  • Does the innovative solution solve a problem unique to my company?
  • Does the solution increase operational flexibility?  If there’s one thing that drives me nuts, it’s throwing money at a product that can only do one very specific function and cannot be used in any other way.  There’s nothing worse than having a storage room full of once-expensive hardware gathering dust.

When building out Revolt TV, there were a variety of challenges that most startups have. One of the bigger and potentially costly problems that needed to be solved was how to interconnect two buildings full of employees and core services located half a block from each other. Early in the design process, I spoke to multiple networking vendors and ISPs about the challenge of connecting two buildings with very high data bandwidth capacity, a dozen baseband video circuits, two dozen audio channels, VoIP, internet services and support for broadcast communication hardware. The bids I received were staggering, some running almost half a million dollars annually to accomplish what I was looking for.

I kept looking. Every vendor I spoke with got to hear about my challenge of connecting the buildings. Some really smart people made some suggestions. We could use line-of-sight microwave, but we would be limited to a 1-gigabit pipe per pair of dishes. We could put all production staff in one building for high-bandwidth connectivity to core services, and use MPLS or VPN connections over internet connectivity at each location for the rest of the staff to access business resources.  We could cut back on our operational functionality and requirements. I didn’t hear any viable solution I could take back to the executive team, so I kept networking and seeing what other people were doing.

One day I had lunch with two serious heads: one of the many brilliant engineers I have met over the years and the storage and networking vendor I was using on the project.  The engineer suggested we look at CWDM technology.  CWDM, or Coarse Wavelength-Division Multiplexing, dedicates wavelengths – colors of the light spectrum – to specific services running over fiber optic cable.  As my networking vendor and I dug in further, we realized that this was a very economical solution that required less than ten thousand dollars’ worth of hardware in each building.  A dedicated wavelength per service allowed us to accomplish every single requirement listed above, with room for expansion!  The last hurdle was getting a dedicated point-to-point dark fiber circuit between the two buildings.  This was accomplished via a one-time commissioning fee for splicing fiber across and up the street and running the circuit into our spaces in each building.

This innovative solution worked right out of the box.  The cost of hardware and commissioning was less than $50k.  In fact, it worked so well that we did the same thing down to 1 Wilshire, the carrier hotel where fiber and networking services for the region terminate.  This install let us deliver our secondary video signal to our uplink facility at a fraction of the recurring monthly cost of leasing video fiber from traditional carriers.  It also meant we had no “last mile” costs, and could potentially link directly to any vendor located at 1 Wilshire with a patch cable.

When done right, innovation can decrease OpEx, increase productivity, provide maximum flexibility and growth potential, and make you look like a rock star.  Just make sure a novel approach to a challenge is being taken for good reason, or your team might be spending some long days and nights supporting flaky systems in the name of innovation.

Josh Hatter
www.linkedin.com/pub/josh-hatter/0/177/995

“The Cost of Interruptions” by Eric Wilson

Fun Fact: According to recent studies, interrupting your work to check your email can waste as much as 16 minutes.

On many occasions I have found myself having to explain to those outside of the software engineering world why unplanned interruptions are so, well, disruptive.  I have tried to describe the state of being in the zone: so completely immersed in the understanding and comprehension of a task that a phone call, a question, or even just the need to say ‘hello’ to an engineer in the zone is like pulling out the wrong block during an intense game of Jenga – everything falls down.

To be crystal clear – it is an extremely fragile period of enlightenment.

Much to my delight, Chris Parnin (@chrisparnin) over at ninlabs research did a nice writeup of the effects of interruptions on productivity and focus – accompanied by the requisite scientific rigor.  From his post: Based on an analysis of 10,000 programming sessions recorded from 86 programmers using Eclipse and Visual Studio, and a survey of 414 programmers (Parnin:10), we found:

  • A programmer takes 10 to 15 minutes to start editing code after resuming work from an interruption.
  • When interrupted during an edit of a method, a programmer resumed work in less than a minute only 10% of the time.
  • A programmer is likely to get just one uninterrupted 2-hour session in a day.

Brutal. When is the worst time to interrupt an engineer?  Research shows that the worst time to interrupt anyone is when they have the highest memory load.  Using neural correlates for memory load, such as pupillometry, studies have shown that interruptions during peak load cause the biggest disruption.

I call it ‘being in the zone’ – Chris calls it ‘highest memory load’.

This real cost in lost productivity is something I have been describing for years.  I’m glad it has now been at least somewhat quantified.

Fascinating stuff and a great read. I highly recommend it to those who find engineers to be the grumpy sort. It may just change your opinion.

Eric Wilson,
VP/Head of Product and Technology @ ScoreBig
http://ericwilson.erics.ws/
@ericwilsonsaid

“Salesforce Pivot, from SaaS to PaaS” by David Glettner

Since its inception, Salesforce.com has become a widely used and powerful tool.  We asked David Glettner to share his insight into how the platform fits in with other CRM and sales tools, as well as a perspective from his experience managing a large-scale implementation.

Salesforce Pivot, from SaaS to PaaS – by David Glettner

Salesforce.com is not just for sales team automation anymore.  In 15 years it has gone from a simple contact-tracking tool to a full-featured platform that entire businesses operate on.  With planning and open communication, this platform has the power and functionality to fuel and empower an organization’s growth.

It started out as a way to track the leads and accounts of people interested in purchasing goods and services.  Today there is almost nothing that can’t be done on this PaaS offering: it spans everything from a pure relational database to a full sales and service platform, and an extremely active development community lets you do even more.  Everything from invoice and payment collection to full customer and partner portals has been built by third-party developers and is easily deployed.  Within the past few years, organizations have been flocking to the platform to do much more, not only because of its flexibility, but also because of its scalability, extensibility, and ease of deployment.

A successful implementation of the Salesforce.com technology relies heavily on a strong understanding of the business goals and strong collaboration among stakeholders, and it should include the following key strategies:

  1. Clearly identify business goals and stakeholders
  2. Document existing systems and processes
  3. Clearly communicate the solution to be implemented, including a mapping of existing processes to new processes wherever they deviate
  4. Train users on the system from the vantage point of each of the main user types

While the dream is to have a single comprehensive system that houses all information, we usually find that a myriad of specialized systems are stitched together to accomplish business goals.  Unfortunately, the result tends to look more like alphabet soup: a combination of multiple systems including CRM, ERP, APIs, and so on.  Over time, the overhead of these different systems leads to an army of specialized consultants, or to key employees with indispensable institutional knowledge who are difficult to scale or replace.

So in this day and age of great advances, what does this mean for us?  Coming back to that aspiration of a single system, there are now many financial packages that allow tracking, sales, invoicing, and payment collection to happen natively within Salesforce.com.  And while building a single system was once almost a non-starter (due largely to the challenge of assembling a team with such a broad range of functional and operational skills), the Salesforce.com platform, along with some of its native applications, makes customizing a single system viable.

In a recent project I was engaged in, an organization was operating on a combination of Salesforce.com, Siebel, and PeopleSoft, as part of application sprawl that included 17 custom-built applications for data transformation, integration, and reporting.  After reviewing and documenting these systems and their business functionality, we were able to define a solution that allowed a phased migration onto the Salesforce.com platform, thereby reducing costly consulting, NetOps, DevOps, hardware, and associated overhead expenses.

So as you begin to review your strategic planning initiatives and current budgeting needs, re-evaluate your core systems and consider how a cloud platform like Salesforce.com could further empower your business stakeholders and provide a smoother TechOps operation.

David Glettner
Head of Enterprise Salesforce Initiatives @ Internet Brands
www.linkedin.com/in/dglettner
