Thursday, November 15, 2007

Rendering the Print: The Art of Photography

Digital Raw photography—the use of raw sensor data instead of a camera-processed JPEG as a starting point for photographers—is often a topic of controversy. Photographers make statements such as "my prints look good, I don’t see any need for Raw," "I adjust my images in Photoshop®; it works just fine," or "all those controls are too much work, I just want my software to match the JPEG." Somewhat complex and widely misunderstood, the Raw workflow was created to return control of the print to the photographer. With traditional film, years and even lifetimes were spent learning the techniques of printing in the darkroom. Modern Raw photography provides even more control with less effort, but some education is still required.
This paper will provide a foundation for the understanding of scene rendering. It will introduce the concepts, history, and tools of printmaking, and express their bearing on modern digital photography. It will demonstrate why you should invest the effort to learn the tools of Raw photography, and most importantly, it will prove there is no single "correct" way to render a print.
State of the art
The advent of the digital SLR has created a renaissance in photography. The capabilities and ease of use exhibited by digital SLRs are amazing. Much of the excitement and popularity driving this renaissance can be traced directly to the quality of the prints produced by digital SLRs. If you expose your subject well and choose the right shooting mode, these cameras will produce a very good-looking print with little effort.
"Point and shoot" is a paradigm ingrained in our minds by the marketing of a multi-billion dollar industry. For holiday snapshots, a child’s dance recital, or a high school football game, these cameras produce excellent keepsakes that document and reinforce our memories. With the click of a button, these modern marvels produce a digital file in the form of a JPEG, ready to be printed with no intervention. What more could you ask for?
To reach this stunning level of automation, the camera is making many decisions about how to render a scene and create the print. Translating reality to a photographic print is not a simple task, and the methods used are sophisticated and proprietary. Camera manufacturers have applied years of research and development to the unique algorithms inside each camera. Given a scene, each camera will arrive at a different result. None of these cameras typically deliver results that can objectively be considered "wrong," and in fact each photographer may develop a preference for a camera’s results. Because there are so many different possible ways to render a scene into a print, it becomes important to differentiate between the science of photographic imaging and the art of photography. We will show that a photograph cannot be an accurate reproduction of the light that existed in reality: A photograph is always an interpretation.
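To make the point concrete, here is a deliberately tiny sketch, in Python with invented sensor values, of two equally legitimate ways to render the same linear capture. The curves and numbers are illustrative only, not any camera's actual algorithm:

```python
import math

# Hypothetical linear sensor values (0.0 = black, 1.0 = sensor clipping),
# standing in for demosaiced raw data; real raw files are far richer.
scene = [0.02, 0.10, 0.18, 0.35, 0.60, 0.90]

def gamma_render(v, gamma=1 / 2.2):
    """A plain gamma curve: a 'neutral', literal-looking rendering."""
    return v ** gamma

def filmic_render(v, contrast=6.0):
    """An S-shaped curve that deepens shadows and rolls off highlights,
    loosely imitating a punchier, film-like camera rendering."""
    return 1.0 / (1.0 + math.exp(-contrast * (v - 0.5)))

# Both renderings are legitimate prints of the same captured light.
for v in scene:
    print(f"linear {v:.2f} -> neutral {gamma_render(v):.2f}, "
          f"filmic {filmic_render(v):.2f}")
```

Neither output is "correct"; each curve is one interpretation of the same light, which is the heart of the argument above.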

The Intelligent Wireless Web: Our Next Generation Web

If we concentrate really hard, would it be that difficult to envision the future of the Web? Perhaps it would just take some basic knowledge, a little imagination, a dash of adventure, and pinch of insight. We might find that it is as easy as one, two, three:
1. We merge the Next Generation Internet (NGI) with Internet2,
2. We experiment with interactive intelligent programs, and
3. We improve the user interface with speech recognition, while extending connectivity through wireless devices.
That's the future of the Web - a combination of broadband delivery, innocuous interfaces, and ubiquitous access - and all with interactive intelligence. Eventually, all this is not only possible, but highly likely. However, foreseeing such an end point 3 to 7 years in the future is one thing; developing a credible scenario for achieving that result is something else. Let's try to put together some of the pieces to fashion the necessary credible scenario.
1.0 Merging Next-Generation Internet (NGI) and Internet2
The Next-Generation Internet (NGI) initiative is a multi-agency Federal research and development program that is developing advanced networking technologies, developing revolutionary applications that require advanced networking, and demonstrating these capabilities on test-beds that are 100 to 1,000 times faster, end-to-end, than today's Internet.
The key distinction between the NGI initiative and Internet2 is that NGI is led by and focuses on the needs of the federal mission agencies, such as DoD, DoE, NASA, and NIH, while Internet2 is university-based and funded through grants.
However, the NGI program focuses on some of the same emphases as Internet2:
Advanced infrastructure development (i.e., networks that can perform at much greater levels than today's commercial Internet).
Advanced applications development.
Research into technologies that will enable advances in infrastructure and applications.
Currently, the Federal NGI initiative and the university-led Internet2 are working together. The National Science Foundation (NSF) has made more than 70 High Performance Connection awards to Internet2 universities. These merit-based awards allow universities to connect to NSF's very high performance Backbone Network Service (vBNS). vBNS connectivity is a key part of NSF's NGI program.
Internet2 universities are establishing gigaPoPs (Gigabit per second Points of Presence) that provide regional connectivity among universities and other organizations. Through the gigaPoPs, universities will connect to NGI networks and other advanced Federal networks, including the vBNS, NASA's Research and Education Network (NREN), DoD's Defense Research and Education Network (DREN), and the Department of Energy's Energy Sciences network (ESnet). The NGI and Internet2 will help ensure that advanced networking services are available on interoperable backbone, regional, and local networks that are competitively provided by multiple vendors.
Progress in this area is proceeding rapidly and we are likely to see significant results in the next 3 to 5 years.
2.0 Experimenting with Interactive Intelligent Programs
For the most part, the Web can be considered a massive information system with interconnected databases and remote applications providing various services. While these services are becoming more and more user oriented, the concept of smart applications on the Web is still in its infancy. So how will adding intelligent agents, smart applications, and Artificial Intelligence (AI) programs to Web sites contribute to the development of the Intelligent Web?
To begin to address this issue, we will have to explore some uncharted territory and face some probing and provocative questions, such as:
How smart are today's Web applications?
Information Portals are among the most sophisticated applications on the Web today. During 1998, the first wave of Internet Portals became very popular. They provided consumers with personalized points of entry to a wide variety of information on the Internet. Examples included MyYahoo (Yahoo), NetCenter (Netscape), MSN (Microsoft), and AOL.
Subsequently, Enterprise Information Portals (EIP), also called Corporate Portals, provided ready access to information over Intranets and the Internet. Corporate Portals moved beyond the delivery of information: they also provide a way to integrate the many disparate systems and processes that are typically used within an enterprise. Corporate Portals are able to use XML to integrate previously separate legacy systems and to provide a single point of entry to these processes.
In addition, EIPs now act through state-of-the-art AI applications to search, retrieve, and repackage data for access centers that tie together people and data. They link e-mail, groupware, workflow, collaboration, and other mission-critical applications. The EIP is in the process of developing into an even more powerful center through component-based applications called Web Services. Web Services use the XML standards, frameworks, and schemas that make up today's most sophisticated applications.
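As a toy illustration of the XML-based integration described above (the record fields and values are invented, and real portal integration is far more involved):

```python
import xml.etree.ElementTree as ET

# Two hypothetical 'legacy systems' exposing the same customer in
# different shapes, unified into one XML document a portal could serve.
crm_record = {"name": "Ada Lovelace", "tier": "gold"}
billing_record = {"balance": "42.50"}

customer = ET.Element("customer")
ET.SubElement(customer, "name").text = crm_record["name"]
ET.SubElement(customer, "tier").text = crm_record["tier"]
ET.SubElement(customer, "balance").text = billing_record["balance"]

# One integrated view, regardless of which system owned each field.
xml_doc = ET.tostring(customer, encoding="unicode")
print(xml_doc)
```

The single `<customer>` document is the "single point of entry": consumers of the portal never need to know that two separate systems supplied the fields.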
But considering the incredible amount of programming, installing, debugging, and maintenance they require, would you categorize any of these inflexible programs as truly intelligent?
What is Web intelligence?
Intelligence usually refers to the ability to reason, to solve problems, to remember information, or to learn new ideas. In May 1997, IBM's Deep Blue supercomputer played a defining match with the reigning World Chess Champion, Garry Kasparov. This was the first time a computer had won a complete match against the world's best human chess player. For almost 50 years, researchers in the field of artificial intelligence had pursued just this milestone. The success of Deep Blue and chess programming was important because it successfully employed both types of AI methods: logic and introspection.
For now, the Web consists primarily of a huge number of data nodes (containing text, pictures, sounds, and video). The data nodes are connected through hyperlinks to form `hyper-networks' that collectively could represent complex ideas and concepts above the level of the individual data. However, the Web does not currently perform many sophisticated tasks with this data. So far, the Web lacks some of the vital ingredients it needs: a global database schema, a global error-correcting feedback mechanism, a logic layer protocol, and a method of adopting Learning Algorithms. As a result, we may say that the Web continues to grow and evolve, but it does not adapt. And adapting is an essential ingredient of learning.
The jury may still be out on defining the Web as intelligent (and may be for some time), but we can still consider ways to change the Web to give it the capability to adapt and therefore to learn.
How will the Web get smarter?
Central to human intelligence is the process of learning or adapting. Likewise, machine learning may be the most important aspect of Artificial Intelligence (AI), including behavior, cognition, symbolic manipulation, and achieving goals. This suggests that AI software should be concerned with being changeable or adaptable. The challenge for AI is to learn capabilities for helping people derive specifically targeted knowledge from diverse information sources such as the Web. Subsequently, one of the challenges facing Web Services is developing a global consensus for an architecture that lets applications (using object-oriented specialized software components) plug into an "application bus" and call a Web AI service.
We can define a Learning Algorithm as a process that takes a data set from a database as input and, after performing its algorithmic operation, returns an output statement representing learning. As the Web increases the percentage of applications and protocols that use Learning Algorithms, we can expect improvements in performance, in both quality and kind.
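A minimal sketch of this definition, with the "database" reduced to an in-memory list of labeled numbers (all names and values are invented for illustration):

```python
def learning_algorithm(dataset):
    """Takes a data set of (value, label) pairs as input and, after its
    algorithmic operation, returns an output statement representing
    learning: here, a threshold separating two classes of numbers."""
    positives = [x for x, label in dataset if label == 1]
    negatives = [x for x, label in dataset if label == 0]
    # "Learn" a decision boundary halfway between the two classes.
    threshold = (min(positives) + max(negatives)) / 2.0
    return f"classify x as 1 when x > {threshold:.2f}"

# A toy data set standing in for a table in a real database.
data = [(0.9, 0), (1.3, 0), (2.8, 1), (3.4, 1)]
print(learning_algorithm(data))  # classify x as 1 when x > 2.05
```

The point is the shape of the process, data in, learned statement out, not the trivial rule itself; any real Learning Algorithm on the Web would operate on far richer data.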
The Web may become a learning network through components of AI agents and AI application built with adaptive software languages and connected by Web AI Service Portals.
However, regardless of how AI applications are processed on the Web, a vital challenge will be the establishment of trusted information. The process must build trust in information and will include a form of Information Registration and Validation. This will remain an issue for some time to come.

Mobile commerce products are now being integrated with wireless Enterprise portals. These m-Commerce products are designed to work efficiently within a customer's existing suite of mobile products and services, and range from auto-registration capabilities and a universal cart and catalog to e-wallet products.
The road map for achieving a set of connected applications for data on the Web in the form of a logical web of data is called the Semantic Web. An underlying idea of semantic networks is the ability to resolve the semantics of a particular node by following an arc until a node is found with which the agent is familiar. The Semantic Web, in competition with AI Web Services, forms a basic element of the Intelligent Web (see Figure 1).
Figure 1. Web Architecture - From dumb and static to intelligent and dynamic
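The arc-following idea behind semantic networks can be sketched in a few lines (the nodes and arcs below are invented for illustration):

```python
from collections import deque

# A toy semantic network: each node points to related nodes via arcs.
# All node names and arcs are invented for illustration.
arcs = {
    "jaguar": ["big cat", "car brand"],
    "big cat": ["feline"],
    "car brand": ["automobile"],
    "feline": ["animal"],
    "automobile": ["machine"],
}

def resolve(start, known):
    """Follow arcs breadth-first from an unfamiliar node until a node
    is found with which the agent is familiar, mirroring the semantic
    resolution described in the text."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node in known:
            return node
        for neighbor in arcs.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return None  # no familiar node reachable

print(resolve("jaguar", known={"animal", "machine"}))  # animal
```

An agent that knows nothing about "jaguar" grounds it by walking arcs until it reaches a concept it already understands; the Semantic Web applies the same idea at Web scale.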
Finally, whether learning is achievable from AI Portals at all remains extremely controversial. The virtue of controversies, however, is that they motivate experts to uncover dormant capabilities in response to the challenge of resolving competing paradigms. For example, opportunities for wireless developers and Internet service providers will greatly expand when they are able to reach all mobile users by developing infrastructure that is able to support any wireless carrier, any wireless network (TDMA, CDMA, etc.), any wireless device (pager, digital cell phone, PDA), any wireless application, any Web format (WML, etc.), any wireless technology (WAP, SMS, pager, etc.), and any medium (text, audio, text-to-speech, voice recognition, or video).

Trying to accomplish universal interoperability is a demanding challenge requiring resolution of the competing paradigms: balancing diverse proprietary standards (which stimulate competition) against open standards (which offer universal access).

Progress in this area is proceeding and we are likely to see significant results in the next 5 to 7 years.
3.0 Improving the User Interface and Extending Connectivity with Wireless Devices
Imagine living your entire life within the confines of a specified region that surrounds you. You could call this region your Personal Space. As you travel from home to work, this defined region travels with you just like a 'bubble.' If you look around this space, how many electronic devices would you see? How many wires would exist? With every new electronic device, you add to the 'cable tangle' around you, both at the office and at home. But now, wireless technology can add connectivity to these devices without the encumbering tangle.
Wirelessly connected devices create a network infrastructure called a Wireless Personal Area Network (WPAN). The obvious application of a WPAN is in the office workspace. With this technology, your essential workspace electronic devices will be wirelessly networked together. These could include your desktop, mobile computer, printer, handheld device, mobile phone, pager, etc. Your personal devices could, for example, wirelessly update your appointment calendar on your office PC. You would have greater flexibility in arranging your office because peripherals would no longer need to be within cable length of the PC. The growth of home automation and smart appliances could also use WPAN applications, just as in the office.
A WPAN will also allow devices to work together and share each other's information and services. For example, a Web page can be called up on a small screen and then wirelessly sent to a printer for full-size printing. A WPAN can even be created in a vehicle via devices such as wireless headsets, microphones, and speakers for communications. Additionally, wireless devices may eventually be embedded throughout public places to provide continuous connectivity as you travel within your Personal Space from one location to another.
In today's environment, information (such as that available from AI Web Service Providers) is one of our most valuable commodities, and small, cheap, yet powerful devices may offer universal access to vital information. Thus a lifetime of knowledge may be accessed through gateways worn on the body or placed within our Personal Space.
As envisioned, WPAN will allow the user to customize his or her communications capabilities, enabling everyday devices to become smart, tether-less devices that spontaneously communicate whenever they are in close proximity (see Figure 2).
Figure 2. Personal Connectivity - From Wired to Wireless
With billions of devices already in use today, developing multipurpose communications that can receive and transmit compatible signals is a daunting challenge. At the local level, Personal Area Networks (PANs) form device-to-device interfaces at work and at home. At the global level, we must adapt an interlacing complex of networks (such as the integrated NGI and Internet2) to connect compatibly to a large number of possible device-to-device combinations.
The key problems with the small devices available today are that their screens are small with low resolution, their power and memory are limited, and their bandwidth is inadequate. Small mobile wireless device computing environments are not able to run large, complex operating systems and applications. Instead, distributed applications, which gain their capabilities from collections of separate devices working in concert, will be necessary. Unlike desktop computers, small mobile wireless devices use a variety of processors and operating systems and are programmed in a variety of languages.
One solution to output problems may be larger screens. The extra space could come from a flexible screen that unfolds like a map. Plug it into a pocket PC and you have a workable product. But pocket-sized foldable screen technology is still a few years away. An alternative is the "electronic ink" technology being developed at E Ink of Cambridge, MA, in which electrostatic charges orient white microscopic particles suspended in tiny spheres. Unfortunately, electronic ink is also several years from practical use. Another approach keeps the display small but offers good resolution using magnifying lenses mounted on monocular units or goggles. Sony's Glasstron and the Eye-Trek from Olympus both give the viewer an image equivalent to a 132-centimeter screen seen from two meters away.
If output over small screens looks troublesome, input problems are even harder. Just think what it's like using the keypad of your cell phone to send typed messages. Several cellular-phone manufacturers, including Motorola and Nokia, are already offering fledgling speech recognition in the form of simple "yes" or "no" responses, or one-word names for stored phone numbers. Speech recognition and speech synthesis offer attractive solutions to overcome the input and output limitations of small mobile devices, if they can overcome their own limitations of memory and processing power through the right balance in the client-server relationship between the small device and nearby embedded resources. The essential components for achieving this balance are new chip designs coupled with open adaptive software. The new chips may provide hardware that is small, lightweight, and power-efficient while having the ability to perform applications by downloading adaptive software as needed.
The success of mobile communications lies in its ability to provide instant connectivity anytime, anywhere in a practical and user-friendly manner. If the convergence of the mobile wireless and fixed information networks is to have significance, the quality and speeds available in the mobile environment must begin to match those of the fixed networks. How to build this broadband wireless network is the difficult question. Telecom companies will need to spend billions of dollars to catapult today's narrowband (9.6 kbps) cell-phone infrastructure to achieve broadband capabilities.
Working against broadband access is a fundamental law of data communications. Back in 1948, Claude E. Shannon of Bell Labs found that the maximum amount of data that can be transmitted through any channel is limited by the available bandwidth (the amount of radio-frequency spectrum it occupies) and its signal-to-noise ratio (the strength of the signal to be communicated versus the background interference). The need for high-speed data services and high-quality voice transmission under roaming conditions represents a significant challenge for wireless communications.
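Shannon's result is the Shannon-Hartley theorem, C = B log2(1 + S/N). A quick illustration (the channel numbers are hypothetical, not those of any particular system):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley theorem: the maximum error-free data rate, in
    bits per second, of a channel with the given bandwidth (Hz) and
    linear signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 30 kHz channel (a typical analog cellular voice slot) at a 15 dB
# signal-to-noise ratio; the numbers are illustrative.
snr_linear = 10 ** (15 / 10)                 # 15 dB -> ~31.6
capacity = shannon_capacity(30_000, snr_linear)
print(f"{capacity / 1000:.0f} kbps")         # roughly 151 kbps
```

Because capacity grows only logarithmically with signal-to-noise ratio, carriers cannot simply "shout louder"; reaching broadband speeds requires more spectrum, which is why the narrowband cell-phone infrastructure is so hard to upgrade.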
The invasion of digital communications into the wireless world is already in progress. Analog cell phones were found to be useful as a tool, but it is digital phones that have become a mainstay of wireless communications throughout the world.
Today, you can buy a book from Amazon.com, reserve tickets for a concert, or access your company's intranet right from your mobile phone. But technical limitations make it a tedious task. Wallets such as Microsoft's Passport and Yahoo! Wallet simplify and speed up data entry by automatically sending the pertinent information to an e-tailer when a transaction is completed. However, mobile commerce is more attractive when viewed from the perspective of a longer time horizon.
There are several mobile competitors influencing different regions of the world. The most widely used cellular network technology is GSM (Global System for Mobile Communications), a Time Division Multiple Access (TDMA) system used in both Europe and Asia. Unfortunately, TDMA is less adaptable to the Internet's bursty data flows. A key alternative, Code Division Multiple Access (CDMA), faces strong opposition in many quarters.
The cellular phone industry has been unsuccessfully trying to market wireless packet data services to consumers since 1994. At that time, the industry offered a packet data overlay on top of the AMPS analog cellular service. But due to bandwidth limitations (the maximum data throughput with CDPD is approximately 14.4 kbps), wireless data failed to catch on with US consumers. More recent technologies, such as HDML, WAP, Compact HTML, and J2ME, have helped solve part of the bandwidth issue by reducing the amount of information sent over wireless links, and new wireless packet data technologies (e.g., GPRS, IEEE 802.11) are increasing bandwidth. There remains, however, the problem of what to do when the mobile subscriber unit roams to a different IP subnetwork. Mobile IP offers a solution to this problem by having the router on the home subnetwork "tunnel" IP packets to the mobile at a "care-of address" on the new IP subnetwork. There are also some technical issues to be worked out with Mobile IP, such as firewalls. Mobile IP represents a simple and scalable global mobility solution, but lacks the support for fast handoff control, authentication, real-time location tracking, and distributed policy management that is found in cellular networks today. In contrast, third-generation cellular systems offer seamless mobility support, but are built on a complex connection-oriented networking infrastructure that lacks the flexibility, robustness, and scalability found in IP networks. Future wireless networks should be capable of combining the strengths of both approaches without inheriting their weaknesses.

Progress in this area is proceeding and we are likely to see significant results during the next 3 to 7 years.

4.0 Conclusion
Our vision for the next generation Web is the result of combining some basic knowledge, a little imagination, a dash of adventure, and a pinch of insight to expose three basic steps:
1. We proposed merging the Next Generation Internet (NGI) with Internet2,
2. We proposed experimenting with interactive intelligent programs, and
3. We proposed improving the user interface with speech recognition, while extending connectivity through wireless devices.
That's our future of the Web - a combination of broadband delivery, innocuous interfaces, and ubiquitous access - and all with interactive intelligence within the next 5 to 7 years.
Whether AI (Artificial Intelligence) Web Services will eventually be characterized as contributing to Web "thinking" is a relatively unimportant question, because whether or not "smart" programs "think," they are already demonstrating that they are useful. The discussion of how to build intelligence to enlighten the optical pathways that inhabit the Web is foreshadowed by the evolution of mobile wireless communications through the efforts of today's innovators.
References
Alesso, H. Peter, and Smith, Craig F., The Intelligent Wireless Web, Addison-Wesley Professional, ISBN 0201730634, Dec. 2001.

Wednesday, May 9, 2007


Visual Studio .NET IDE


Visual Studio .NET IDE (Integrated Development Environment) is the development environment for all .NET-based applications and comes with rich features. The VS .NET IDE provides many options and is packed with features that simplify application development by handling its complexities. Visual Studio .NET IDE is an enhancement to all previous IDEs from Microsoft.

Important Features

One IDE for all .NET Projects

Visual Studio .NET IDE provides a single environment for developing all types of .NET applications, ranging from single windows applications to complex n-tier applications and rich web applications.

Option to choose from Multiple Programming Languages

You can choose a programming language based on your expertise in that language. You can also incorporate multiple programming languages in one .NET solution and edit it within the IDE.

IDE is Customizable

You can customize the IDE based on your preferences; the My Profile settings allow you to do this. With these settings you can control the layout of the IDE screen and the behavior of the keyboard, and you can also filter the help files based on the language of your choice.

Built-in Browser

The IDE comes with a built-in browser that helps you browse the Internet without launching another application. You can look for additional resources, online help files, source code, and much more with this built-in browser feature.

When we open VS .NET from Start->Programs->Microsoft Visual Studio .NET->Microsoft Visual Studio .NET, the window that is displayed first is the Start Page, shown below. The Start Page lets us select from the most recent projects we worked with (the last four), and it can be customized based on your preferences.



The Integrated Development Environment (IDE) shown in the image below is what we actually work with. This IDE is shared by all programming languages in Visual Studio. You can view the toolbars towards the left side of the image along with the Solution Explorer window towards the right.



New Project Dialogue Box

The New Project dialogue box, like the one in the image below, is used to create a new project, specify its type, name it, and choose the location on disk where it is saved. The default location on the hard disk where all projects are saved is C:\Documents and Settings\Administrator\My Documents\Visual Studio Projects.



Following are the different templates under Project Types and their uses.

Windows Application: This template allows you to create standard Windows-based applications.

Class Library: Class libraries provide functionality similar to ActiveX controls and DLLs, by creating classes that can be accessed by other applications.

Windows Control Library: This allows you to create your own Windows controls, also called User Controls, where you group some controls, add the group to the toolbox, and make it available to other projects.

ASP .NET Web Application: This allows you to create web-based applications using IIS. We can create web pages, rich web applications, and web services.

ASP .NET Web Service: Allows you to create XML Web Services.

Web Control Library: Allows you to create user-defined controls for the Web. These are similar to user-defined Windows controls but are used for the Web.

Console Application: A new kind of application in Visual Studio .NET. These are command-line-based applications.

Windows Service: These run continuously regardless of user interaction. They are designed for special purposes and, once written, will keep running and come to an end only when the system is shut down.

Other: This template is used to develop other kinds of applications, like enterprise applications and database applications.

Friday, May 4, 2007

What is Green Architecture?

An introduction to Green Building and a description of the way in which the ARC Design Group strives to meet the goals of Green Building.

First and foremost, a green building serves the needs of the people who inhabit it. It supports and nurtures their health, satisfaction, productivity, and spirit. It requires the careful application of the acknowledged strategies of sustainable architecture — non-toxic construction, the use of durable, natural, resource efficient materials, reliance on the sun for daylighting, thermal and electric power, and recycling of wastes into nutrients. An elegant architectural integration of these strategies produces a building which honors the aspirations of those who use it and engages the natural world. And it must be more.

We recognize that the conversion of our culture to a sustainable basis involves a fundamental transformation of the human spirit. We must rediscover our interconnectedness and interdependence with something much larger than ourselves: the natural world (on the material plane) and the spiritual realm which transcends it. Bo Lozoff has called the first community and the second communion, and he suggests that we must have both to be truly at home in the world.

Community supports sustainability. Certain key strategies of a sustainable society can only be sensibly implemented at a larger scale than a single building. Examples of this in the Northeast bioregion are annual cycle solar energy storage and district heating, solar aquatic waste treatment, bioshelters, and clean cogeneration of electrical and thermal power from biomass. Given this, ARC has been actively involved in community building in the areas of cohousing, sustainable business, and in the architectural design process itself.

We believe that excellence in environmental design can only arise from a truly integrated design team — a community of designers. ARC has taken the first step by forming an architectural partnership which consists of an architect, an engineer, and a designer/builder/businessperson. Ultimately the community of designers must include all the stakeholders in the project — everyone becomes a designer, and contributes a unique wisdom to the whole. The participatory design process becomes a powerful method of community building. It is a central aspect of our work with our clients. Most buildings are designed as a settlement between the various designers, each defending their own turf. The settlement is produced by compliance. But co-operation — inviting others to the table — yields a different result, and we replace the current relay-race approach to design and building with an integrated approach.

We’ve discovered important aspects of this system which are different from the conventional design process, such as:

• Schematic design should occupy roughly 40% of the design effort rather than the conventional 25%, because systems and envelope design are so interrelated with massing and siting;

• The process is messy, thorny, and bumpy, and therefore it takes time. Integrated design requires complex thinking and testing up front, so we need to resist schedule compression;

• You can’t just plug in new technologies — synergies have to be developed, nurtured, and woven into a seamless fabric;

• The construction process requires similar integration. The adversarial, low-bid approach is a disaster, and systematically yields poor results. Builders must be brought into the design process and should be selected using the same criteria we use to select other professionals: for their skill, experience, and integrity.

• Post occupancy concern and tuning is an essential missing factor in the modern day building equation. Green buildings must be commissioned to make certain systems are operating properly, especially as we incorporate technological change and improvement.

Sustainability is so much more than solar heat and non-toxicity. The struggle to achieve it demands that we question each part of the process, while remembering that, as Paul Ehrlich says, the first rule of intelligent tinkering is to save all the parts.

A green building is often described as one which minimizes its negative environmental impact. We seek to turn that around, and aim for buildings which improve the quality of the air and water, produce surplus power and food, and convert waste into nutrients and useful products.

The Community as Client

A critical part of the success of green buildings is the client. A client is almost always made up of a community, and usually several.

One of the strengths we bring to this project is our ability to work with groups and our interest in this process. We don’t make the decisions. We design, we guide, we tug, we forage, and we elicit the community will. "The first responsibility of a leader," writes retired Herman Miller CEO Max De Pree, "is to define reality." We help the client community create an insightful sense of current and future reality.

We have devoted ourselves, in recent years, to the discovery and practice of effective ways to help groups articulate who they are and what they need. Through cohousing projects, through employee-owned business ventures, through our work with a Tribal community, and our work with large communities of interest, we are beginning to learn to get the best out of group endeavors. We have witnessed charrette situations which failed to take advantage of the collective intelligence gathered for the purpose. We therefore insist on thoughtful facilitation, whether it be by us or by others whose competence exceeds our own.

As we write, one of our partners, John Abrams, is preparing to leave for an intensive session with Stewart Brand, and others, working to discover how scenario planning — a process developed to help corporations and governments understand the future — can be applied to the design process. And surely the Oberlin project will include a community of participants envisioning the future. Another partner, Bruce Coldham, has just returned from the third North American Cohousing Conference. The simulation game he invented has become a standard training tool for resident-developer cohousing groups (see illustration).

A description of our approach to a recent project may be the best way to convey our sense of the community design process.

The Northeast Sustainable Energy Association has begun a project in Western Massachusetts with far-reaching goals. ARC Design was hired to lead the organization through the visioning and schematic design process for the Northeast Sustainability Center. We began by holding a weekend session with staff, board, and members — about 80 persons — to establish the vision, the dream, the parameters of the project. The workshop organized, filtered, and prioritized the community’s thinking and produced a photographic record of the collective voice of the group. It was a provocative educational experience for the participants.

After the vision workshop, ARC worked with the Building Committee to create the program and a set of design objectives for the project, and then formed a Technical Resource Group. The idea was to engage a group of experts in the various realms of ecological design and request their help in testing our design objectives and suggesting how they might be achieved. From a long list of possibilities, we pared down to 15 experts from across the country, plus one from Norway. We asked them to commit themselves to a one-day meeting in the Northeast and an additional 10 hours of consulting time. We would pay their expenses only. None refused the invitation. The workshop was a day-long discussion of what this project might be. Seventy-five NESEA members attended. The day was a combination of plenary activity and small-group work. Ideas were recorded on cards and organized on a giant pinboard. They became the basis for a series of 40 schematic design solution concepts.

All this happened before we had a site. Armed with program, objectives, and design concepts, we needed a place to go. The site search began in earnest. This would be a major test. Did the project have enough appeal that a low-overhead organization with zero equity and a few wild ideas would be able to secure real estate? Who knew? One of the project objectives was to align with local educational institutions. Another was to take advantage of existing buildings and infrastructure. Another was to be the beginning of a sustainable community. Before long, we had three solid offers — a piece of land at Hampshire College, a piece of land at the New England Small Farms Institute facility, and a building on the University of Massachusetts campus. All were attractive; all promised important affiliations and bridge building.

Then something happened. Greenfield, where NESEA has been located, is a small city whose main economic activity (tool and die manufacturing) had died. The town leaders got wind of the project and didn't want NESEA to leave. They offered to donate a 6,000 sq. ft. downtown building for NESEA to rehab and occupy. Part of the deal was that NESEA would design and become a partner with the town in the construction of an ecological park adjacent to the building. The project's scope extended, and some of its objectives began to be realized (rehab a building, stimulate economic revitalization, integrate building with landscape and urban agriculture).

The most compelling part of the story is the developing relationship with the town of Greenfield. This unexpected twist strengthened the project immensely and sprang directly from a broad, open process that had room for new ideas. All signs point to an enduring and synergistic collaboration between the town and the organization. The park has become an important part of the project and the town’s future. As it proceeds, it may energize downtown Greenfield, it may change the transportation system, it may attract new businesses and visitors, and it may re-awaken Greenfield's strong sense of pride. This is an example of a community design process in action.

Architectural Solutions to Environmental Problems

Practiced this way, green architecture becomes a significant part of the path to a sustainable future. It brings people together in community, and thereby demonstrates and deepens our connection to each other and the natural world. These qualities uplift and nurture the human spirit — they help us discover and honor our purpose and our selves.

The development of a competent architectural integration of ecological principles promises to assist environmental restoration and healing. It also promises buildings that endure. Only buildings that endure — ones that are loved, and cherished, and cared for — will be solutions rather than problems.

Environmental problems are, above all, complex. It takes multi-level, well-linked biological inquiry to achieve understanding and resolution. If the architectural community can offer a systematic collaborative approach to such understanding, we will have built something more dynamic than just buildings. The emergence of a coherent new process might provide an important tail wind to push the long march to sustainability.



Thursday, May 3, 2007

Visual Basic Introduction

Visual Basic .NET

Visual Basic .NET provides the easiest, most productive language and tool for rapidly building Windows and Web applications. Visual Basic .NET comes with enhanced visual designers, increased application performance, and a powerful integrated development environment (IDE). It also supports creation of applications for wireless, Internet-enabled hand-held devices. The following are the features of Visual Basic .NET with the .NET Framework 1.0 and of Visual Basic .NET 2003 with the .NET Framework 1.1; together they answer the questions "Why should I use Visual Basic .NET?" and "What can I do with it?"

Powerful Windows-based Applications

Visual Basic .NET comes with features such as a powerful new forms designer, an in-place menu editor, and automatic control anchoring and docking, and it delivers new productivity features for building more robust applications easily and quickly. With an improved integrated development environment (IDE) and a significantly reduced startup time, Visual Basic .NET offers fast, automatic formatting of code as you type, improved IntelliSense, an enhanced object browser and XML designer, and much more.
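As a minimal sketch of the anchoring and docking features mentioned above (the form and control names are our own, not from any product documentation), a Windows Form can lay out its controls like this:

```vbnet
Imports System
Imports System.Windows.Forms

' Hypothetical form demonstrating automatic control anchoring and docking.
Public Class MainForm
    Inherits Form

    Public Sub New()
        Me.Text = "Anchor and Dock Demo"

        Dim notes As New TextBox()
        notes.Multiline = True
        notes.Height = 120
        notes.Dock = DockStyle.Top        ' docked: resizes with the form's width

        Dim okButton As New Button()
        okButton.Text = "OK"
        ' Anchored: keeps a fixed distance from the bottom-right corner as the form resizes.
        okButton.Anchor = AnchorStyles.Bottom Or AnchorStyles.Right
        okButton.Location = New System.Drawing.Point( _
            Me.ClientSize.Width - 90, Me.ClientSize.Height - 40)

        Me.Controls.Add(notes)
        Me.Controls.Add(okButton)
    End Sub

    <STAThread()> _
    Public Shared Sub Main()
        Application.Run(New MainForm())
    End Sub
End Class
```

With `Dock` and `Anchor` set this way, the text box and button reposition themselves automatically when the form is resized, with no layout code required.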

Building Web-based Applications

With Visual Basic .NET you can create Web applications using the shared Web Forms Designer and the familiar drag-and-drop feature. You can double-click a control and write code to respond to its events. Visual Basic .NET 2003 comes with an enhanced HTML editor for working with complex Web pages. You can also use IntelliSense technology and tag completion, or choose the WYSIWYG editor for visual authoring of interactive Web applications.

Simplified Deployment

With Visual Basic .NET you can build applications more rapidly and deploy and maintain them efficiently. Visual Basic .NET 2003 and the .NET Framework 1.1 make "DLL Hell" a thing of the past. Side-by-side versioning enables multiple versions of the same component to live safely on the same machine, so that applications can use a specific version of a component. XCOPY deployment and Web auto-download of Windows-based applications combine the simplicity of Web-page deployment and maintenance with the power of rich, responsive Windows-based applications.

Powerful, Flexible, Simplified Data Access

You can tackle any data-access scenario easily with ADO.NET and ADO data access. The flexibility of ADO.NET enables data binding to any database, as well as to classes, collections, and arrays, and provides true XML representation of data. Seamless access to ADO enables simple data access for connected data-binding scenarios. Using ADO.NET, Visual Basic .NET applications gain high-speed access to Microsoft SQL Server, Oracle, DB2, Microsoft Access, and more.
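A minimal ADO.NET sketch of the connected data-access path, assuming a local SQL Server with the Northwind sample database (the connection string and query are placeholders, and `Using` is avoided because it arrived after Visual Basic .NET 2003):

```vbnet
Imports System
Imports System.Data.SqlClient

Module DataAccessDemo
    Sub Main()
        ' Hypothetical connection string; substitute your own server and database.
        Dim connStr As String = "Server=(local);Database=Northwind;Integrated Security=SSPI"
        Dim conn As New SqlConnection(connStr)
        Try
            conn.Open()
            Dim cmd As New SqlCommand("SELECT CompanyName FROM Customers", conn)
            Dim reader As SqlDataReader = cmd.ExecuteReader()
            While reader.Read()
                Console.WriteLine(reader.GetString(0))
            End While
            reader.Close()
        Finally
            conn.Close()    ' always release the connection, even on error
        End Try
    End Sub
End Module
```

The same pattern works against Oracle, DB2, or Access by swapping in the appropriate managed provider.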

Improved Coding

You can code faster and more effectively. A multitude of enhancements to the code editor, including enhanced IntelliSense, smart listing of code for greater readability, and a background compiler for real-time notification of syntax errors, transforms Visual Basic .NET into a rapid application development (RAD) coding machine.

Direct Access to the Platform

Visual Basic developers have full access to the capabilities available in the .NET Framework 1.1. Developers can easily program system services, including the event log, performance counters, and the file system. The new Windows Service project template enables you to build real Microsoft Windows NT services. Note that programming against Windows services and creating new Windows services are not available in Visual Basic .NET Standard; they require Visual Studio .NET 2003 Professional or higher.
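As a brief sketch of that platform access (the event source name and file path are placeholders, and writing to the event log requires appropriate permissions):

```vbnet
Imports System
Imports System.Diagnostics
Imports System.IO

Module PlatformDemo
    Sub Main()
        ' Write an entry to the Application event log.
        ' "MyVbApp" is a hypothetical source name; register it once before use.
        If Not EventLog.SourceExists("MyVbApp") Then
            EventLog.CreateEventSource("MyVbApp", "Application")
        End If
        EventLog.WriteEntry("MyVbApp", "Application started.", _
                            EventLogEntryType.Information)

        ' Query the file system through the same framework classes.
        Dim files As String() = Directory.GetFiles("C:\", "*.txt")
        Console.WriteLine("Found {0} text files.", files.Length)
    End Sub
End Module
```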

Full Object-Oriented Constructs

You can create reusable, enterprise-class code using full object-oriented constructs. Language features include full implementation inheritance, encapsulation, and polymorphism. Structured exception handling provides a global error handler and eliminates spaghetti code.
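These constructs can be sketched together in a few lines; the `Shape` and `Circle` classes below are our own illustration, not from any framework:

```vbnet
Imports System

' Encapsulation: the field is private and exposed through a read-only property.
Public Class Shape
    Private _name As String

    Public Sub New(ByVal name As String)
        _name = name
    End Sub

    Public ReadOnly Property Name() As String
        Get
            Return _name
        End Get
    End Property

    Public Overridable Function Area() As Double
        Return 0.0
    End Function
End Class

' Full implementation inheritance and polymorphism via Overrides.
Public Class Circle
    Inherits Shape

    Private _radius As Double

    Public Sub New(ByVal radius As Double)
        MyBase.New("Circle")
        _radius = radius
    End Sub

    Public Overrides Function Area() As Double
        Return Math.PI * _radius * _radius
    End Function
End Class

Module Demo
    Sub Main()
        ' Structured exception handling replaces On Error Goto spaghetti.
        Try
            Dim s As Shape = New Circle(2.0)
            Console.WriteLine("{0} area: {1}", s.Name, s.Area())
        Catch ex As Exception
            Console.WriteLine("Error: " & ex.Message)
        Finally
            Console.WriteLine("Done.")
        End Try
    End Sub
End Module
```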

XML Web Services

XML Web services enable you to call components running on any platform using open Internet protocols. Working with XML Web services is easier, with enhancements that simplify the discovery and consumption of XML Web services located within any firewall. An XML Web service can be built as easily as you would build any class in Visual Basic 6.0. The XML Web service project template builds all of the underlying Web service infrastructure.
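A minimal ASMX sketch of "a Web service is just a class" (the class name and namespace URI are placeholders of our own):

```vbnet
Imports System.Web.Services

' Marking a method with <WebMethod()> is all it takes to expose it
' over HTTP; the project template generates the surrounding plumbing.
<WebService(Namespace:="http://example.org/")> _
Public Class HelloService
    Inherits System.Web.Services.WebService

    <WebMethod()> _
    Public Function Add(ByVal a As Integer, ByVal b As Integer) As Integer
        Return a + b
    End Function
End Class
```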

Mobile Applications

Visual Basic .NET 2003 and the .NET Framework 1.1 offer integrated support for developing mobile Web applications for more than 200 Internet-enabled mobile devices. These new features give developers a single mobile Web interface and programming model to support a broad range of Web devices, including WML 1.1 for WAP-enabled cellular phones, compact HTML (cHTML) for i-Mode phones, and HTML for Pocket PC, handheld devices, and pagers. Note that Pocket PC programming is not available in Visual Basic .NET Standard; it requires Visual Studio .NET 2003 Professional or higher.

COM Interoperability

You can maintain your existing code without the need to recode. COM interoperability enables you to leverage your existing code assets and offers seamless bi-directional communication between Visual Basic 6.0 and Visual Basic .NET applications.

Reuse Existing Investments

You can reuse all your existing ActiveX controls. Windows Forms in Visual Basic .NET 2003 provide a robust container for existing ActiveX controls. In addition, full support for existing ADO code and data binding enables a smooth transition to Visual Basic .NET 2003.

Upgrade Wizard

You can upgrade your code to receive all of the benefits of Visual Basic .NET 2003. The Visual Basic .NET Upgrade Wizard, available in Visual Basic .NET 2003 Standard Edition and higher, upgrades up to 95 percent of existing Visual Basic code and forms to Visual Basic .NET, with new support for WebClasses and UserControls.
