Attribution: OPM vs Sony

I read Top U.S. spy skeptical about U.S.-China cyber agreement, a story based on today's Senate Armed Services Committee hearing titled United States Cybersecurity Policy and Threats. It contained this statement:

U.S. officials have linked the OPM breach to China, but have not said whether they believe its government was responsible.

[Director of National Intelligence] Clapper said no definite statement had been made about the origin of the OPM hack since officials were not fully confident about the three types of evidence that were needed to link an attack to a given country: the geographic point of origin, the identity of the “actual perpetrator doing the keystrokes,” and who was responsible for directing the act.

I thought this was interesting for several reasons. First, does DNI Clapper mean that the US government has not made an official statement attributing the OPM breach to China because all "three types of evidence" are missing, or because only one or two are present? If the latter, which elements do we have, and which are we missing?

Second, how specific must the identity of the "actual perpetrator doing the keystrokes" be? Did DNI Clapper mean that he requires the Intelligence Community to identify a named person, or is it enough for the IC to know the responsible team?

Third, and perhaps most importantly, contrast the OPM case with the DPRK hack against Sony Pictures Entertainment. Assuming that DNI Clapper and the IC applied these “three types of evidence” for SPE, that means the attribution included the geographic point of origin, the identity of the “actual perpetrator doing the keystrokes,” and the identity of the party directing the attack, which was the DPRK. The DNI mentioned “broad consensus across the IC regarding attribution,” which enabled the administration to apply sanctions in response.

For those wondering if the DNI is signalling a degradation in attribution capabilities, I direct you to his statement, which says in the attribution section:

Although cyber operations can infiltrate or disrupt targeted ICT networks, most can no longer assume their activities will remain undetected indefinitely. Nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.

I was pleased to see the DNI refer to the revolution in private sector and security intelligence capabilities.

More here


Are Self-Driving Cars Fatally Flawed?

I read the following in the Guardian story Hackers can trick self-driving cars into taking evasive action.

Hackers can easily trick self-driving cars into thinking that another car, a wall or a person is in front of them, potentially paralysing it or forcing it to take evasive action.

Automated cars use laser ranging systems, known as lidar, to image the world around them and allow their computer systems to identify and track objects. But a tool similar to a laser pointer and costing less than $60 can be used to confuse lidar…

The following appeared in the IEEE Spectrum story Researcher Hacks Self-driving Car Sensors.

Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles…

Petit acknowledges that his attacks are currently limited to one specific unit but says, “The point of my work is not to say that IBEO has a poor product. I don’t think any of the lidar manufacturers have thought about this or tried this.” 

I had the following reactions to these stories.

First, it's entirely possible that self-driving car manufacturers know about this attack model. They might have decided that it's worth producing cars despite the technical vulnerability. For example, WiFi has no defense against jamming of the RF spectrum, and there are also non-RF methods of disrupting WiFi, as detailed here. Nevertheless, WiFi is everywhere, although lives usually don't depend on it.

Second, researcher Jonathan Petit appears to have tested an IBEO Lux lidar unit and not a real self-driving car. We don’t know, from the Guardian or IEEE Spectrum articles at least, how a Google self-driving car would handle this attack. Perhaps the vendors have already compensated for it.
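
For readers unfamiliar with why a cheap pulsed laser can fool the sensor: lidar estimates range from the round-trip time of its own reflected pulses, so an attacker who emits pulses with a chosen delay can make the unit register an echo, and therefore an object, at an arbitrary distance. A minimal sketch of that time-of-flight arithmetic (illustrative only, not Petit's actual technique):

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
# A spoofed pulse arriving after a chosen delay looks like a legitimate echo
# from an object at the corresponding range.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance the lidar would report for an echo with this round-trip time."""
    return C * round_trip_seconds / 2.0

# A real object 30 m away produces an echo after roughly 200 nanoseconds.
real_delay = 2 * 30.0 / C
print(f"real object: {range_from_echo(real_delay):.1f} m")

# An attacker who fires a pulse about 13 ns after the lidar's own pulse
# creates a phantom "object" only 2 m ahead of the car.
spoofed_delay = 2 * 2.0 / C
print(f"phantom object: {range_from_echo(spoofed_delay):.1f} m")
```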

Third, these articles may undermine one of the presumed benefits of self-driving cars: that they are safer than human drivers. If self-driving car technology is vulnerable to an attack not found in driver-controlled cars, that is a problem.

Fourth, does this attack mean that driver-controlled cars with similar technology are also vulnerable, or will be? Are there corresponding attacks for systems that detect obstacles on the road and trigger the brakes before the driver can physically respond?

Last, these articles demonstrate the differences between safety and security. Safety, in general, is a discipline designed to improve the well-being of people facing natural, environmental, mindless threats. Security, in contrast, is designed to counter intelligent, adaptive adversaries. I am predisposed to believe that self-driving car manufacturers have focused on the safety aspects of their products far more than the security aspects. It’s time to address that imbalance.

More here


Using Samsung’s Internet-Enabled Refrigerator for Man-in-the-Middle Attacks

This is interesting research:

Whilst the fridge implements SSL, it FAILS to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections. This includes those made to Google’s servers to download Gmail calendar information for the on-screen display.

So, MITM the victim’s fridge from next door, or on the road outside and you can potentially steal their Google credentials.

The notable exception to the rule above is when the terminal connects to the update server — we were able to isolate the URL which is the same used by TVs, etc. We generated a set of certificates with the exact same contents as those on the real website (fake server cert + fake CA signing cert) in the hope that the validation was weak but it failed.

The terminal must have a copy of the CA and is making sure that the server’s cert is signed against that one. We can’t hack this without access to the file system where we could replace the CA it is validating against. Long story short we couldn’t intercept communications between the fridge terminal and the update server.
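
The distinction the researchers draw, between connections that accept any certificate and an update channel that checks the certificate against a known CA, comes down to how the TLS context is configured. A minimal Python illustration of the two patterns (hypothetical hostname and CA handling, not the fridge's actual code):

```python
import socket
import ssl

HOST = "calendar.example.com"  # hypothetical endpoint standing in for the Gmail calendar service

# Broken pattern (what the fridge reportedly does for most connections):
# TLS is used, but the peer's certificate is never validated, so any
# man-in-the-middle certificate is silently accepted.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# Sound pattern (roughly what the update channel appears to do): the peer's
# certificate must chain to a trusted CA, so a forged certificate is rejected.
# The fridge apparently pins its own CA; load_verify_locations("update_ca.pem")
# with a real CA bundle on disk would be the equivalent here.
validated = ssl.create_default_context()

def connect(ctx: ssl.SSLContext) -> None:
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated", tls.version(), "with", HOST)

# connect(insecure)   # succeeds even if an attacker presents a forged certificate
# connect(validated)  # fails unless the certificate chains to a trusted CA
```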

When I think about the security implications of the Internet of things, this is one of my primary worries. As we connect things to each other, vulnerabilities in one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed and low cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.

More here


AVA: A Social Engineering Vulnerability Scanner

This is interesting:

First, it integrates with corporate directories such as Active Directory and social media sites like LinkedIn to map the connections between employees, as well as important outside contacts. Bell calls this the “real org chart.” Hackers can use such information to choose people they ought to impersonate while trying to scam employees.

From there, AVA users can craft custom phishing campaigns, both in email and Twitter, to see how employees respond. Finally, and most importantly, it helps organizations track the results of these campaigns. You could use AVA to evaluate the effectiveness of two different security training programs, see which employees need more training, or find places where additional security is needed.
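
As a rough illustration of the "real org chart" idea, mapping directory and social-media connections amounts to building a graph of who interacts with whom and looking for the identities most worth impersonating. A minimal sketch with made-up data (not AVA's actual implementation):

```python
import networkx as nx  # third-party: pip install networkx

# Hypothetical relationships pulled from a corporate directory and public profiles.
edges = [
    ("ceo", "exec_assistant"),
    ("exec_assistant", "finance_clerk"),
    ("cfo", "finance_clerk"),
    ("it_helpdesk", "finance_clerk"),
    ("it_helpdesk", "exec_assistant"),
    ("vendor_contact", "cfo"),
]

org = nx.Graph(edges)

# People many employees already trust are the most attractive identities to
# spoof in a phishing campaign; a simple proxy is degree centrality.
ranked = sorted(nx.degree_centrality(org).items(), key=lambda kv: kv[1], reverse=True)
for person, score in ranked[:3]:
    print(f"{person}: centrality {score:.2f}")
```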

Of course, the problem is that both good guys and bad guys can use this tool. Which makes it like pretty much every other vulnerability scanner.

More here


Effect of Hacking on Stock Price, Or Not?

I just read Brian Krebs's story Tech Firm Ubiquiti Suffers $46M Cyberheist. He writes:

Ubiquiti, a San Jose based maker of networking technology for service providers and enterprises, disclosed the attack in a quarterly financial report filed this week [6 August; RMB] with the U.S. Securities and Exchange Commission (SEC). The company said it discovered the fraud on June 5, 2015, and that the incident involved employee impersonation and fraudulent requests from an outside entity targeting the company’s finance department.

“This fraud resulted in transfers of funds aggregating $46.7 million held by a Company subsidiary incorporated in Hong Kong to other overseas accounts held by third parties,” Ubiquiti wrote. “As soon as the Company became aware of this fraudulent activity it initiated contact with its Hong Kong subsidiary’s bank and promptly initiated legal proceedings in various foreign jurisdictions. As a result of these efforts, the Company has recovered $8.1 million of the amounts transferred.”

Brian credits Brian Honan at CSO Online with noticing the disclosure yesterday.

This is a terrible crime that I would not wish upon anyone. My interest in this issue has nothing to do with Ubiquiti as a company, nor is it intended as a criticism of the company. The ultimate fault lies with the criminals who perpetrated this fraud. The purpose of this post is to capture some details for the benefit of analysis, history, and discussion.

The first question I had was: did this event have an effect on the Ubiquiti stock price? The FY fourth quarter results were released at 4:05 pm ET on Thursday 6 August 2015, after the market closed.

The "Fourth Quarter Financial Summary" listed this as the last bullet:

“GAAP net income and diluted EPS include a $39.1 million business e-mail compromise (“BEC”) fraud loss as disclosed in the Form 8-K filed on August 6, 2015″

I assume the Form 8-K was published simultaneously with the earnings release.

Next, I found the following in this five-day stock chart.

5 day UBNT Chart (3-7 August 2015)

You can see the gap down from Thursday’s closing price, on the right side of the chart. Was that caused by the fraud charge?

I looked to see what the financial press had to say. I found this Motley Fool article titled Why Ubiquiti Networks, Inc. Briefly Fell 11% on Friday, posted at 12:39 PM (presumably ET). However, this article had nothing to say about the fraud.

Doing a little more digging, I saw Seeking Alpha caught the fraud immediately, posting Ubiquiti discloses $39.1M fraud loss; shares -2.9% post-earnings at 4:24 PM (presumably ET).  They noted that “accounting chief Rohit Chakravarthy has resigned.” I learned that the company was already lacking a chief financial officer, so Mr. Chakravarthy was filling the role temporarily. Perhaps that contributed to the company falling victim to the ruse. Could Ubiquiti have been targeted for that reason?

I did some more digging, but it looks like the popular press didn’t catch the issue until Brian Honan and Brian Krebs brought attention to the fraud angle of the earnings release, early today.

Next I listened to the archive of the earnings call. The call was a question-and-answer session, rather than a statement by management followed by Q and A. I listened to analysts ask about head count, South American sales, trademark names, shipping new products, and voice and video. Not until the 17 1/2 minute mark did an analyst ask about the fraud.

CEO Robert J. Pera said he was surprised no one had asked until that point in the call. He said he was embarrassed by the incident and it reflected “incredibly poor judgement and incompetence” by a few people in the accounting department.

Finally, returning to the stock chart, you can see a gap down but a recovery later in the session. The market seems to view this fraud as a one-time event that will not seriously affect future performance. That is my interpretation, anyway. I wish Ubiquiti well, and I hope others can learn from their misfortune.

Update: I forgot to add this before hitting “post”:

Ubiquiti had FY fourth quarter revenues of $145.3 million. The fraud represents a significant portion of that number. If Ubiquiti had earned ten times that in revenue, or more, would the fraud have required disclosure?
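
For scale, a quick back-of-the-envelope comparison using the figures quoted above:

```python
revenue_q4 = 145.3   # FY Q4 revenue, $ millions
fraud_loss = 39.1    # BEC fraud loss recognized, $ millions
transferred = 46.7   # total funds transferred, $ millions
recovered = 8.1      # funds recovered so far, $ millions

print(f"fraud loss vs quarterly revenue: {fraud_loss / revenue_q4:.1%}")   # ~26.9%
print(f"portion of transfers recovered:  {recovered / transferred:.1%}")   # ~17.3%
```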

The disclosure noted:

“As a result of this investigation, the Company, its Audit Committee and advisors have concluded that the Company’s internal control over financial reporting is ineffective due to one or more material weaknesses.”

That sounds like code for a Sarbanes-Oxley issue, so I believe they would have reported anyway, regardless of revenue-to-fraud proportions.

More here


Vulnerabilities in Brink’s Smart Safe

Brink's sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it's wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash….

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smartsafe logs how much money is inside and who accessed it.

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entrypoint for thieves to take complete, administrative control of the devices.

“Once you’re able to plug into that USB port, you’re able to access lots of things that you shouldn’t normally be able to access,” Petro told WIRED. “There is a full operating system…that you’re able to…fully take over…and make [the safe] do whatever you want it to do.”

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. “You plug in this little gizmo, wait about 60 seconds, and the door just pops open,” says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we’ve learned about computer security in the last few decades, you’re right. And that’s the problem with Internet-of-Things security: it’s often designed by people who don’t know computer or Internet security.

They also haven’t learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable driver software associated with the USB port to prevent someone from controlling the safes in this way, or lock down the system and database so it's not running in administrative mode and the database can't be changed, but so far the company appears to have done none of these.


Again, this all sounds familiar. The computer industry learned its lessons over a decade ago. Before then they ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same things to happen with Internet-of-Things companies.

More here


Comparing the Security Practices of Experts and Non-Experts

New paper: “‘…no one can hack my mind’: Comparing Expert and Non-Expert Security Practices,” by Iulia Ion, Rob Reeder, and Sunny Consolvo.

Abstract: The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, which may be unrealistic, time consuming, or not really worth the effort. To improve the security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study which aims to identify which practices people do that they consider most important at protecting their security on-line. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys — one with 231 security experts and one with 294 MTurk participants — on what the practices and attitudes of each group are. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.

More here


Hacking Team’s Purchasing of Zero-Day Vulnerabilities

This is an interesting article that looks at Hacking Team’s purchasing of zero-day (0day) vulnerabilities from a variety of sources:

Hacking Team’s relationships with 0day vendors date back to 2009 when they were still transitioning from their information security consultancy roots to becoming a surveillance business. They excitedly purchased exploit packs from D2Sec and VUPEN, but they didn’t find the high-quality client-side oriented exploits they were looking for. Their relationship with VUPEN continued to frustrate them for years. Towards the end of 2012, CitizenLab released their first report on Hacking Team’s software being used to repress activists in the United Arab Emirates. However, a continuing stream of negative reports about the use of Hacking Team’s software did not materially impact their relationships. In fact, by raising their profile these reports served to actually bring Hacking Team direct business. In 2013 Hacking Team’s CEO stated that they had a problem finding sources of new exploits and urgently needed to find new vendors and develop in-house talent. That same year they made multiple new contacts, including Netragard, Vitaliy Toropov, Vulnerabilities Brokerage International, and Rosario Valotta. Though Hacking Team’s internal capabilities did not significantly improve, they continued to develop fruitful new relationships. In 2014 they began a close partnership with Qavar Security.

Lots of details in the article. This was made possible by the organizational doxing of Hacking Team by some unknown individuals or group.

More here


Remotely Hacking a Car While It’s Driving

This is a big deal. Hackers can remotely hack the Uconnect system in cars just by knowing the car’s IP address. They can disable the brakes, turn on the AC, blast music, and disable the transmission:

The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64; after narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.

Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they’re working on perfecting their steering control — for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep’s GPS coordinates, measure its speed, and even drop pins on a map to trace its route.

In related news, there’s a Senate bill to improve car security standards. Honestly, I’m not sure our security technology is enough to prevent this sort of thing if the car’s controls are attached to the Internet.

More here


Duke APT group’s latest tools: cloud services and Linux support

Recent weeks have seen the outing of two new additions to the Duke group's toolset, SeaDuke and CloudDuke. Of these, SeaDuke is a simple trojan made interesting by the fact that it's written in Python. And even more curiously, SeaDuke, with its built-in support for both Windows and Linux, is the first cross-platform malware we have observed from the Duke group. While SeaDuke is a single – albeit cross-platform – trojan, CloudDuke appears to be an entire toolset of malware components, or "solutions" as the Duke group apparently calls them. These components include a unique loader, downloader, and not one but two different trojan components. CloudDuke also greatly expands on the Duke group's usage of cloud storage services, specifically Microsoft's OneDrive, as a channel for both command and control as well as the exfiltration of stolen data. Finally, some of the recent CloudDuke spear-phishing campaigns have borne a striking resemblance to CozyDuke spear-phishing campaigns from a year ago.

Linux support added with the cross-platform SeaDuke malware

Last week, both Symantec and Palo Alto Networks published research on SeaDuke, a newer addition to the arsenal of trojans being used by the Duke group. While older malware by the Duke group has always been written with a combination of the C and C++ programming languages as well as assembly language, SeaDuke is peculiarly written in Python with multiple layers of obfuscation. This Python code is then usually compiled into Windows executables using py2exe or pyinstaller. However, the Python code itself has been designed to work on both Windows and Linux. We therefore suspect that the Duke group is also using the same SeaDuke Python code to target Linux victims. This is the first time we have seen the Duke group employ malware to target Linux platforms.

[Image: An example of the cross-platform support found in SeaDuke.]
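
The cross-platform aspect is mundane from a programming standpoint, which is part of what makes it effective: the same Python source simply branches on whatever platform it finds itself running on. A generic illustration of that kind of branching (not SeaDuke's actual code):

```python
import sys

def current_platform() -> str:
    """Return a coarse platform tag, the way a cross-platform tool might branch."""
    if sys.platform.startswith("win"):
        return "windows"   # e.g. frozen into an .exe with py2exe or pyinstaller
    if sys.platform.startswith("linux"):
        return "linux"     # e.g. run directly by the system Python interpreter
    return "other"

print(f"running on: {current_platform()}")
```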

A new set of solutions with the CloudDuke malware toolset

Last week, we also saw Palo Alto Networks and Kaspersky Labs publish research on malware components they respectively called MiniDionis and CloudLook. MiniDionis and CloudLook are both components of a larger malware toolset we call CloudDuke. This toolset consists of malware components that provide varying functionality while partially relying on a shared code framework and always using the same loader. Based on PDB strings found in the samples, the malware authors refer to the CloudDuke components as “solutions” with names such as “DropperSolution”, “BastionSolution” and “OneDriveSolution”. A list of PDB strings we have observed is below:

• C:\DropperSolution\Droppers\Projects\Drop_v2\Release\Drop_v2.pdb
• c:\BastionSolution\Shells\Projects\miniDionis4\miniDionis\obj\Release\miniDionis.pdb
• c:\BastionSolution\Shells\Projects\miniDionis2\miniDionis\obj\Release\miniDionis.pdb
• c:\OneDriveSolution\Shells\Projects\OneDrive2\OneDrive\obj\x64\Release\OneDrive.pdb

The first of the CloudDuke components we have observed is a downloader internally called “DropperSolution”. The purpose of the downloader is to download and execute additional malware on the victim’s system. In most observed cases, the downloader will attempt to connect to a compromised website to download an encrypted malicious payload which the downloader will decrypt and execute. Depending on the way the downloader has been configured, in some cases it may first attempt to log in to Microsoft’s cloud storage service OneDrive and retrieve the payload from there. If no payload is available from OneDrive, the downloader will revert to the previously mentioned method of downloading from compromised websites.
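
The retrieval order described above, OneDrive first when configured and compromised websites as a fallback, is simple control flow. A stub-only sketch of that logic as described in the post (placeholder functions, not actual CloudDuke code):

```python
from typing import Optional

def fetch_from_onedrive(config: dict) -> Optional[bytes]:
    """Placeholder for 'log in to OneDrive and retrieve the encrypted payload'."""
    return None  # pretend nothing was available this time

def fetch_from_compromised_site(config: dict) -> Optional[bytes]:
    """Placeholder for 'download the encrypted payload from a compromised website'."""
    return b"ENCRYPTED-PAYLOAD-PLACEHOLDER"

def fetch_encrypted_payload(config: dict) -> Optional[bytes]:
    # Order described in the post: try OneDrive first if the sample is configured
    # to use it, otherwise (or on failure) fall back to compromised websites.
    payload = fetch_from_onedrive(config) if config.get("use_onedrive") else None
    if payload is None:
        payload = fetch_from_compromised_site(config)
    return payload

print(fetch_encrypted_payload({"use_onedrive": True}))
```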

We have also observed two distinct trojan components in the CloudDuke toolset. The first of these, internally called “BastionSolution”, is the trojan that Palo Alto Networks described in their research into “MiniDionis”. Interestingly, BastionSolution appears to functionally be an exact copy of SeaDuke with the only real difference being the choice of programming language. BastionSolution also makes significant use of a code framework that is apparently internally called “Z”. This framework provides classes for functionality such as encryption, compression, randomization and network communications.

[Image: A list of classes in the BastionSolution trojan, including multiple classes from the "Z" framework.]

Classes from the same “Z” framework, such as the encryption and randomization classes, are also used by the second trojan component of the CloudDuke toolset. This second component, internally called “OneDriveSolution”, is especially interesting because it relies on Microsoft’s cloud storage service OneDrive as its command and control channel. To achieve this, OneDriveSolution will attempt to log into OneDrive with a preconfigured username and password. If successful, OneDriveSolution will then proceed to copy data from the victim’s computer to the OneDrive account. It will also search the OneDrive account for files containing commands for the malware to execute.
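
Conceptually, using a cloud storage account as the command and control channel reduces to two file operations: upload what has been stolen, and poll for files containing new commands. A stub-only sketch of that loop as described above (hypothetical helper names, not CloudDuke's code):

```python
def upload_to_account(path: str) -> None:
    """Placeholder for 'copy a file from the victim's machine to the storage account'."""
    print(f"(would upload) {path}")

def list_command_files() -> list:
    """Placeholder for 'search the storage account for files containing commands'."""
    return []  # nothing queued in this illustration

def c2_cycle(stolen_files: list) -> None:
    # The two-way channel described above: exfiltrate via uploads,
    # receive tasking via files dropped into the same account.
    # A real implementation would repeat this on a schedule.
    for path in stolen_files:
        upload_to_account(path)
    for command_file in list_command_files():
        print(f"(would process) {command_file}")

c2_cycle(["example_report.docx"])
```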

[Image: A list of classes in the OneDriveSolution trojan, including multiple classes from the "Z" framework.]

All of the CloudDuke "solutions" use the same loader, a piece of code whose primary purpose is to decrypt the embedded, encrypted solution, load it in memory and execute it. The Duke group has often employed loaders for their malware, but unlike the previous loaders they have used, the CloudDuke loader is much more versatile, with support for multiple methods of loading and executing the final payload as well as the ability to write to disk and execute additional malware components.

CloudDuke spear-phishing campaigns and similarities with CozyDuke

CloudDuke has recently been spread via spear-phishing emails with targets reportedly including organizations such as the US Department of Defense. These spear-phishing emails have contained links to compromised websites hosting zip archives that contain CloudDuke-laden executables. In most cases, executing these executables will have resulted in two additional files being written to the victim's hard disk. The first of these files has been a decoy, such as an audio file or a PDF file, while the second one has been a CloudDuke loader embedding a CloudDuke downloader, the so-called "DropperSolution". In these cases, the victim has been presented with the decoy file while in the background the downloader has proceeded to download and execute one of the CloudDuke trojans, "OneDriveSolution" or "BastionSolution".

[Image: Example of one of the decoy documents employed in the CloudDuke spear-phishing campaigns. It has apparently been copied by the attackers from here.]

Interestingly, however, some of the other CloudDuke spear-phishing campaigns we have observed this July have borne a striking resemblance to CozyDuke spear-phishing campaigns seen almost exactly a year ago, in the beginning of July 2014. In both spear-phishing campaigns, the decoy document has been the exact same PDF file, a "US letter fax test page" (28d29c702fdf3c16f27b33f3e32687dd82185e8b). Similarly, the URLs hosting the malicious files have, in both campaigns, purported to be related to eFaxes. It is also interesting to note that in the case of the CozyDuke-inspired CloudDuke spear-phishing campaign, the downloading and execution of the malicious archive linked to in the emails has not resulted in the execution of the CloudDuke downloader but in the execution of the "BastionSolution" component, thereby skipping one step from the process described for the other CloudDuke spear-phishing campaigns.

[Image: The "US letter fax test page" decoy employed in both CloudDuke and CozyDuke spear-phishing campaigns.]

Increasingly using cloud services to evade detection

CloudDuke is not the first time we have observed the Duke group use cloud services in general, and Microsoft OneDrive specifically, as part of their operations. Earlier this spring we released research on CozyDuke, in which we mentioned observing CozyDuke sometimes use a OneDrive account directly to exfiltrate stolen data, or alternatively download Visual Basic scripts that would copy stolen files to a OneDrive account and sometimes even retrieve files containing additional commands from the same account.

In these previous cases the Duke group has only used OneDrive as a secondary communication channel, while still relying on more traditional C&C channels for most of their actions. It is therefore interesting to note that CloudDuke actually enables the Duke group to rely solely on OneDrive for every step of their operation, from downloading the actual trojan to passing commands to the trojan and finally exfiltrating stolen data.

By relying solely on 3rd party web services, such as OneDrive, as their command and control channel, we believe the Duke group is trying to better evade detection. Large amounts of data being transferred from an organization's network to an unknown web server easily raises suspicions. However, data being transferred to a popular cloud storage service is normal. What better way for an attacker to surreptitiously transfer large amounts of stolen data than the same way people are transferring that same data every day for legitimate reasons? (Coincidentally, the implications of 3rd party web services being used as command and control channels are also the subject of an upcoming talk at the VirusBulletin 2015 conference.)

Directing limited resources towards evading detection and staying ahead of defenders

Developing even a single multipurpose malware toolset, never mind many, requires time and resources. It therefore seems logical to attempt to reuse code, such as supporting frameworks, between different toolsets. The Duke group, however, appear to have taken this a step further with SeaDuke and the CloudDuke component BastionSolution, by rewriting the same code in multiple programming languages. This has the obvious benefit of saving time and resources by providing two malware toolsets that, while similar on the inside, appear completely different on the outside. This way, the discovery of one toolset does not immediately lead to the discovery of the second toolset.

The Duke group, long suspected of ties to the Russian state, have been running their espionage operation for an unusually long time and – especially lately – with unusual brazenness. These latest CloudDuke and SeaDuke campaigns appear to be a clear sign that the Dukes are not planning to stop any time soon.

Research and post by Artturi (@lehtior2)

F-Secure detects CloudDuke as Trojan:W32/CloudDuke.B and Trojan:W64/CloudDuke.B



Compromised servers used for command and control:


Compromised websites used to host CloudDuke:


On 22/07/15 At 11:59 AM

More here
